vampjaz / autoidsserver

Frontend for automating IDSDeathBlossom using a web interface and Flask
If the background thread crashes at any point, pcaps can no longer be processed and the entire server has to be restarted. We need a better approach.
Perhaps spawning a new thread for every pcap would work. Each thread would acquire a global resource lock while it does the processing, so that multiple files being processed at once cannot interfere with each other.
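The per-pcap thread plus global lock could be sketched like this (a minimal sketch; `process_pcap` is a stand-in for the actual IDSDeathBlossom invocation, not existing code):

```python
import threading

processing_lock = threading.Lock()

def process_in_thread(pcap_path, process_pcap):
    """Spawn a worker thread for one pcap; the lock serializes processing."""
    def worker():
        with processing_lock:  # only one pcap is processed at a time
            process_pcap(pcap_path)
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

A crash in one worker then only loses that pcap's run instead of taking down the single shared background thread.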
They can get parsed by bash, which could lead to a code execution vulnerability...
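One way to close that hole is to never build a shell string at all: pass the engine command as an argument list so bash never parses the uploaded filename. The `-r` flag here is an assumption about the engine's CLI, not confirmed by the source:

```python
import subprocess

def run_engine(engine_cmd, pcap_path):
    # Argument-list form: no shell is involved, so metacharacters in the
    # uploaded filename (;, $, backticks, ...) are passed through literally.
    subprocess.run([engine_cmd, '-r', pcap_path], check=True)
```

Sanitizing filenames on upload is still worth doing, but the list form removes the injection vector even if a bad name slips through.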
Add a textbox to the main page that overrides the default rule sets, letting the user paste in rules to analyze with. Maybe it would only become visible if a dropdown ruleset selection was set to "custom".
Update engines to match engines available in IDSDeathBlossom
Perhaps this could be as simple as uploading to cloudshark.
Not all of the logfiles are useful for the user to see, and some of them can even be a bit too revealing. I'm thinking we whitelist a certain set of filenames, like eve.json, fast.log, and whatever else we think of.
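A minimal sketch of that whitelist, assuming the logfiles for a run live in one directory (the set below only includes the names mentioned above):

```python
import os

# Only these filenames are exposed to users; everything else the
# engines write stays hidden.
ALLOWED_LOGFILES = {'eve.json', 'fast.log'}

def visible_logfiles(log_dir):
    """Return the whitelisted logfiles present in log_dir, sorted."""
    return [f for f in sorted(os.listdir(log_dir)) if f in ALLOWED_LOGFILES]
```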
Use Pygmentize to format the logfiles so they are hopefully easier to read.
make a setup script to
other scripts
If this is a public webapp with no account control, there needs to be a way to prevent people from uploading tons of junk to the server, taking up disk space and processing power.
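Absent accounts, a naive per-IP cap on uploads is one option. This in-memory sketch uses arbitrary limits (both numbers are assumptions) and is not a substitute for a real rate limiter:

```python
import time
from collections import defaultdict

MAX_UPLOADS = 10   # per IP, per window (arbitrary)
WINDOW = 3600      # seconds

_uploads = defaultdict(list)  # ip -> recent upload timestamps

def allow_upload(ip, now=None):
    """Return True and record the upload if this IP is under its limit."""
    now = time.time() if now is None else now
    recent = [t for t in _uploads[ip] if now - t < WINDOW]
    _uploads[ip] = recent
    if len(recent) >= MAX_UPLOADS:
        return False
    recent.append(now)
    return True
```

A per-file size cap and a disk-usage quota would guard the other half of the problem (junk taking up disk space).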
Currently the pagination on the pcap list lets you go to the next page even if there is no next page. Perhaps we could make the next button disappear when there is no further page of pcaps to show:
c.execute('SELECT * FROM pcaps ORDER BY uploaded DESC LIMIT ? OFFSET ?', (40, 40 * (page - 1)))  # get 40 pcaps, skipping 40 per preceding page
If we instead fetched 41 pcaps and displayed only the first 40, the presence of a 41st row would tell us there is at least one more page, and we could show the next-page navigation accordingly.
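The fetch-one-extra trick could look like this, reusing the query above with `LIMIT 41` (a sketch assuming a sqlite3 connection and the `pcaps` table shown):

```python
PAGE_SIZE = 40

def get_pcap_page(conn, page):
    """Return (rows for this page, whether a next page exists)."""
    c = conn.cursor()
    # Fetch one row beyond the page size; its presence tells us a next
    # page exists without needing a second COUNT(*) query.
    c.execute(
        'SELECT * FROM pcaps ORDER BY uploaded DESC LIMIT ? OFFSET ?',
        (PAGE_SIZE + 1, PAGE_SIZE * (page - 1)),
    )
    rows = c.fetchall()
    has_next = len(rows) > PAGE_SIZE
    return rows[:PAGE_SIZE], has_next
```

The template then renders the next button only when `has_next` is true.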
Because of the length of logfiles and the difficulty in scrolling down the page to find them, it would be nice to have a page navigation sidebar in the logfile display. It would just link to an anchor on the page at each logfile header.
Currently the system prevents you from running the same file through twice, because files are identified by a hash; even if the settings are different, the system will refuse to run it again.
Going along with the refactoring that #2 will entail, we should add the ability to re-run a file with different settings without having to reupload it. Perhaps a link from the page for that pcap, combined with a redirect from the main page if you upload a file with the same hash as one already in the database.
The files that Snort generates in particular can reach into the tens of megabytes. Parsing those with the syntax highlighter and transferring them over the network causes a lot of lag and can make the server hang completely.
I propose that files up to a certain size are displayed in full. Beyond that size, they are truncated and a download link is provided for the raw file, which should be hosted on a static server like Apache to improve performance.
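A sketch of the size cutoff; the 256 KiB threshold is an arbitrary assumption and would need tuning:

```python
MAX_INLINE_BYTES = 256 * 1024  # display files up to this size in full

def load_for_display(path):
    """Return (text to render, whether the file was truncated)."""
    with open(path, 'rb') as f:
        data = f.read(MAX_INLINE_BYTES + 1)  # read one extra byte to detect overflow
    truncated = len(data) > MAX_INLINE_BYTES
    text = data[:MAX_INLINE_BYTES].decode('utf-8', errors='replace')
    # When truncated is True, the template would show a download link
    # pointing at the raw file on the static server.
    return text, truncated
```

Skipping the syntax highlighter entirely for oversized files avoids the worst of the lag.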
Would like the ability to search the file archive by hash and keywords, maybe even to the point of searching through the logfile output to see whether a certain event happened in any of the logs.
Add a checkbox at upload time that keeps the file from showing up on the main list. Only people with access to the link (which is an md5 of the file) would be able to view it. If someone still has the file but forgot the link, they could reupload it and be taken to the page for that file, since it is already in the system.
Because processing takes long enough that we would not want to hang the webserver by running IDSDeathBlossom during a web request, it runs in the background. Currently a user has to refresh the page manually a couple of seconds after submitting to see the results. This could be handled either with a simple HTML refresh every 5 seconds until the status is marked complete, or by having the server send the client a websocket message when the background thread finishes.
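The HTML-refresh variant is tiny. A sketch of just the view logic, with the status value stubbed in rather than read from the database:

```python
def pcap_page_html(status):
    """Render the pcap page body for a given processing status."""
    if status != 'complete':
        # Ask the browser to re-request the page every 5 seconds until
        # the background thread marks processing complete.
        return '<meta http-equiv="refresh" content="5"><p>Processing...</p>'
    return '<p>Results ready.</p>'  # stand-in for the real results page
```

The websocket route avoids the repeated requests but pulls in an extra dependency, so the refresh tag is probably the pragmatic first step.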
Current issue: files are identified by the file hash alone. If we upload the same file but want to run it through a different engine, or with a different rule set, the system simply says it is already in the database. We need to key runs on the file plus the engine and rules used, so that a second upload with different settings is not rejected.
Alternatively, each file's page could have links to reprocess it with another engine or ruleset.
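Keying runs on (hash, engine, ruleset) instead of hash alone would make the duplicate check pass when the settings differ. The table and column names below are assumptions for illustration:

```python
import hashlib

def run_exists(conn, file_bytes, engine, ruleset):
    """True if this exact (file, engine, ruleset) combination was already run."""
    h = hashlib.md5(file_bytes).hexdigest()
    row = conn.execute(
        'SELECT 1 FROM runs WHERE hash = ? AND engine = ? AND ruleset = ?',
        (h, engine, ruleset)).fetchone()
    return row is not None
```

The per-file page URL can stay hash-based; only the duplicate check and the results lookup need the composite key.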