Comments (25)
This request, together with #2, has been open for quite a while. As the OP mentioned, doing a full scan of the log files is a very heavy operation when restarting fail2ban, especially on shared servers with multiple independent log files (even with daily rotation, those logs easily exceed 1GB).
Would an additional option such as 'search_logs_on_start' (true/false) be an option, to optionally disregard the current content of those logs and only act on new content (like a 'tail') as it appears? It could greatly speed up the start/restart time of f2b.
from fail2ban.
findtime = 15
bantime = 5
maxretry = 2
(timestamps in 'seconds ago')
12 - failure
6 - failure [banned]
[restart f2b]
[doesn't get re-banned, even though it would still be banned if f2b hadn't been restarted]
from fail2ban.
Right, f2b should look back up to 'findtime', not 'bantime', since 'findtime' is the window within which failure occurrences are counted, while 'bantime' is how long the IP stays banned. 'findtime' is the correct scanning window for this.
from fail2ban.
Why would it be, though? The goal is to re-ban IPs that were banned at the time of restart (assuming an instant shutdown -> start). Using only findtime creates a window where an IP was banned at shutdown but will not be re-banned at startup.
from fail2ban.
Sorry, change findtime and bantime to 10 in the example above - IP will not be re-banned.
from fail2ban.
You are correct, I wasn't doing the math properly: findtime is a sliding window during normal operation, but in my previous comment it was applied as a fixed window. The correct window to use is findtime+bantime.
With findtime+bantime, the scan goes back far enough to catch the earliest failure of any IP that should still be banned. (The idea here is that if f2b were restarted immediately, it would re-ban all the same IPs. If f2b were shut down for a while, it would only re-ban those IPs whose bans would not have expired during the shutdown, i.e. it restores the same state as if f2b had never been shut down.)
Of course, this isn't a perfect solution: IPs that get banned during startup in this manner will have been banned for longer than normal, because their ban effectively starts over. If we want to truly "restore state" as if f2b were never shut down, then IPs banned during startup should be banned for bantime minus the time already served (which can be determined from the timestamp of the failure that causes their re-ban on startup, which should be the same as the timestamp of their last ban in the fail2ban log).
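The "bantime minus time already served" arithmetic above can be sketched as follows. This is a hypothetical helper for illustration, not fail2ban's API:

```python
import time

def remaining_ban(last_failure_ts, bantime, now=None):
    """Hypothetical helper: how long an IP should still be banned after a
    restart, assuming its ban started at the time of its last failure.
    Returns 0 if the ban would already have expired during the downtime."""
    now = time.time() if now is None else now
    return max(0, bantime - (now - last_failure_ts))

# bantime = 600s and the last failure was 400s ago: 200s of ban remain.
print(remaining_ban(1000 - 400, 600, now=1000))  # -> 200
```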
Speaking of which... why bother reading all the original logs again? Why not just read fail2ban.log (or whatever the log destination is for fail2ban, e.g. syslog, piped through grep fail2ban) instead? You'd go back the same amount of time (findtime+bantime) and just look for all "Ban" lines without a corresponding "Unban" line, then re-ban those IPs. The fail2ban log is almost certainly going to be smaller than the original error log, and you can read it just once (storing it in memory for processing all jails), instead of reading multiple log files. (You can make this even faster by grepping only for Ban and Unban lines, ignoring all other lines.)
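A sketch of that idea, assuming fail2ban's default log line shape (e.g. "... [sshd] Ban 192.0.2.7"); the function name and exact regex here are illustrative:

```python
import re

# Assumed format of fail2ban's own log lines, e.g.
#   "2013-11-26 10:00:01 fail2ban.actions: WARNING [sshd] Ban 192.0.2.7"
BAN_RE = re.compile(r"\[(?P<jail>[^\]]+)\]\s+(?P<action>Ban|Unban)\s+(?P<ip>\S+)")

def still_banned(fail2ban_log_lines):
    """Collect (jail, ip) pairs whose last 'Ban' line has no later 'Unban'.
    A single pass suffices because the log is chronological."""
    banned = set()
    for line in fail2ban_log_lines:
        m = BAN_RE.search(line)
        if not m:
            continue
        key = (m.group("jail"), m.group("ip"))
        if m.group("action") == "Ban":
            banned.add(key)
        else:
            banned.discard(key)
    return banned

lines = [
    "2013-11-26 10:00:01 fail2ban.actions: WARNING [sshd] Ban 192.0.2.7",
    "2013-11-26 10:05:00 fail2ban.actions: WARNING [sshd] Unban 192.0.2.7",
    "2013-11-26 10:07:30 fail2ban.actions: WARNING [postfix] Ban 198.51.100.3",
]
print(still_banned(lines))  # -> {('postfix', '198.51.100.3')}
```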
from fail2ban.
Agree with using findtime+bantime... I thought I mentioned this somewhere before, but I'm not sure where. I understand the caveat and agree it's acceptable: they deserved to be banned before, so why not ban them a little longer under this circumstance.
Interesting point with using the fail2ban log, but what if it's STDOUT? It may not be used much in practice, but it is a valid value for logtarget, based on Python's logging and the comments in fail2ban.conf.
from fail2ban.
concur -- findtime + bantime ;-) and yeap -- we discussed it somewhere and you, @leeclemens, got it right that time and I didn't ;-)
from fail2ban.
@leeclemens : You are correct that looking at the fail2ban log won't work if fail2ban is logging to STDOUT. However, it would be a much "cleaner" and faster solution to do it that way, rather than re-reading all the log files. Perhaps STDOUT logging should be disabled? =) Alternatively, the "reban on restart" could be disabled if STDOUT is the log destination. It's perfectly legitimate to have some features be incompatible with one another, though I agree that it can be avoided by re-reading all the old logs. (Unless they've been rotated between shutdown and restart...)
from fail2ban.
Looking at the source code, this seems to be solvable in server/filter.py, in the class FileContainer and its open definition, which is responsible for opening log files and retaining the position of the cursor on each file in memory.
The fix could be done by discarding any self.__pos that equals 0 and looping until the end of the file is reached. It would mean that a logrotate would discard any content that happens to be on the first line, since it would reset self.__pos back to 0 and continue looping through the file until the end.
One catch: empty files never have a self.__pos larger than 0, which could cause infinite loops.
Is there any interest in picking this up, since the issue has been open for quite a while? Perhaps frequent contributor @grooverdan could take a peek? :-)
My Python skills are unfortunately insufficient to solve it -- I tried but failed at every attempt.
from fail2ban.
Would an additional option such as 'search_logs_on_start' (true/false) be an option
Yes, this sounds good.
Another thing I noticed is that it matches all the filter regexes before it checks the time, and only then decides to discard the line if it's old. This is exceptionally wasteful, and fixing it could perhaps increase the efficiency enough.
Code for reading on start is in server/filter.py:FileFilter.getFailures, at the end.
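The reordering suggested above (check the timestamp before running the failregex) could be sketched like this. The names, timestamp format, and regexes are illustrative, not fail2ban's actual code:

```python
import calendar
import re
import time

# Assume ISO-style timestamps, e.g. "2013-11-26 10:00:01 sshd[123]: ..."
DATE_RE = re.compile(r"^(\d{4}-\d\d-\d\d \d\d:\d\d:\d\d)")

def process_line(line, failregex, findtime, now):
    """Sketch of the proposed reordering: parse the (cheap) timestamp
    first, and only run the (expensive) failregex on lines that are
    newer than findtime."""
    m = DATE_RE.match(line)
    if not m:
        return None
    ts = calendar.timegm(time.strptime(m.group(1), "%Y-%m-%d %H:%M:%S"))
    if now - ts > findtime:
        return None  # stale line: the costly failregex is never evaluated
    return failregex.search(line)
```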
Is there any interest in picking this up?
Yes it is coming up commonly as a problem for users.
Since the issue has been open for quite a while? Perhaps frequent contributor @grooverdan could take a peek? :-)
Perhaps.
from fail2ban.
Did some tests of how much time the failregex matching takes. It really only saved 13 seconds out of ~3 minutes. Will profile some more later.
mail.log (size 394246361 bytes), copied to test.log
Checking date/time before the failregex:

$ time ./fail2ban-regex test.log config/filter.d/postfix-sasl.conf

Running tests
=============
Use failregex file : config/filter.d/postfix-sasl.conf
Use log file       : test.log

Results
=======
Failregex: 0 total
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
|  [2313872] MONTH Day Hour:Minute:Second
`-
Lines: 2313872 lines, 0 ignored, 0 matched, 2313872 missed
Missed line(s): too many to print. Use --print-all-missed to print all 2313872 lines

real    2m42.447s
user    2m30.381s
sys     0m11.724s
Checking the date/time after checking the failregex:

$ time ./fail2ban-regex test.log config/filter.d/postfix-sasl.conf

Running tests
=============
Use failregex file : config/filter.d/postfix-sasl.conf
Use log file       : test.log

Results
=======
Failregex: 10153 total
|- #) [# of hits] regular expression
|  1) [10153] ^\s*(<[^.]+\.[^.]+>)?\s*(?:\S+ )?(?:kernel: \[ *\d+\.\d+\] )?(?:@vserver_\S+ )?(?:(?:\[\d+\])?:\s+[\[\(]?postfix/smtpd(?:\(\S+\))?[\]\)]?:?|[\[\(]?postfix/smtpd(?:\(\S+\))?[\]\)]?:?(?:\[\d+\])?:?)?\s(?:\[ID \d+ \S+\])?\s*warning: [-._\w]+\[<HOST>\]: SASL (?:LOGIN|PLAIN|(?:CRAM|DIGEST)-MD5) authentication failed(: [ A-Za-z0-9+/]*={0,2})?\s*$
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
|  [2313872] MONTH Day Hour:Minute:Second
`-
Lines: 2313872 lines, 0 ignored, 10153 matched, 2303719 missed
Missed line(s): too many to print. Use --print-all-missed to print all 2303719 lines

real    2m55.890s
user    2m43.736s
sys     0m11.753s
from fail2ban.
In fail2ban-regex:

import cProfile
cProfile.run('fail2banRegex.process(test_lines)')

$ ./fail2ban-regex test.log config/filter.d/postfix-sasl.conf
ncalls          tottime  percall  cumtime  percall  filename:lineno(function)
1                 0.000    0.000  238.085  238.085  <string>:1(<module>)
5090520           2.312    0.000    8.494    0.000  __init__.py:1127(debug)
4637897           3.598    0.000    9.576    0.000  __init__.py:1198(log)
9728417           5.846    0.000    5.846    0.000  __init__.py:1328(getEffectiveLevel)
9728417           5.546    0.000   11.392    0.000  __init__.py:1342(isEnabledFor)
2313874           1.388    0.000   16.137    0.000  _strptime.py:27(_getlang)
2313872          31.872    0.000   64.578    0.000  _strptime.py:295(_strptime)
2313872           2.089    0.000   66.668    0.000  _strptime.py:466(_strptime_time)
2313872           8.656    0.000   19.359    0.000  datedetector.py:183(matchTime)
2313872           8.415    0.000  117.191    0.000  datedetector.py:196(getTime)
2313872           2.055    0.000  126.470    0.000  datedetector.py:212(getUnixTime)
231388            1.592    0.000    9.444    0.000  datedetector.py:220(sortTemplate)
4859158           3.909    0.000    5.631    0.000  datedetector.py:224(<lambda>)
2313875          12.648    0.000  102.951    0.000  datetemplate.py:134(getDate)
2313872           1.038    0.000    1.038    0.000  datetemplate.py:63(incHits)
4627750           2.278    0.000    7.955    0.000  datetemplate.py:69(matchDate)
2313872           1.811    0.000    3.606    0.000  fail2ban-regex:229(testIgnoreRegex)
2313872           2.998    0.000  212.369    0.000  fail2ban-regex:241(testRegex)
1                 9.651    9.651  238.085  238.085  fail2ban-regex:260(process)
2313872           2.087    0.000   18.395    0.000  failregex.py:72(search)
2313872          13.189    0.000  209.353    0.000  filter.py:287(processLine)
4627744           3.868    0.000    3.868    0.000  filter.py:342(ignoreLine)
2313872           7.388    0.000   33.453    0.000  filter.py:356(findFailure)
2313874           6.507    0.000   10.337    0.000  locale.py:347(normalize)
2313874           1.489    0.000   11.826    0.000  locale.py:415(_parse_localename)
2313874           1.871    0.000   14.750    0.000  locale.py:514(getlocale)
2313872           1.256    0.000    1.594    0.000  mytime.py:57(time)
2313872           1.346    0.000    3.682    0.000  mytime.py:70(gmtime)
2313872           1.228    0.000    5.914    0.000  utf_8.py:15(decode)
2313872           4.687    0.000    4.687    0.000  {_codecs.utf_8_decode}
2313874           1.053    0.000    1.053    0.000  {_locale.setlocale}
6951945           2.047    0.000    2.047    0.000  {isinstance}
6952417/6952408   1.097    0.000    1.097    0.000  {len}
4859132           1.635    0.000    1.635    0.000  {method 'acquire' of 'thread.lock' objects}
2313872           2.600    0.000    8.514    0.000  {method 'decode' of 'str' objects}
6941661           1.186    0.000    1.186    0.000  {method 'get' of 'dict' objects}
4648050           1.140    0.000    1.140    0.000  {method 'group' of '_sre.SRE_Match' objects}
2313872           3.874    0.000    3.874    0.000  {method 'groupdict' of '_sre.SRE_Match' objects}
2313872           4.622    0.000    4.622    0.000  {method 'index' of 'list' objects}
2324025           4.525    0.000    4.525    0.000  {method 'match' of '_sre.SRE_Pattern' objects}
4859132           1.101    0.000    1.101    0.000  {method 'release' of 'thread.lock' objects}
4627815           1.174    0.000    1.174    0.000  {method 'replace' of 'str' objects}
2313872           2.117    0.000    2.117    0.000  {method 'rstrip' of 'unicode' objects}
6941622          21.985    0.000   21.985    0.000  {method 'search' of '_sre.SRE_Pattern' objects}
231388            1.255    0.000    6.886    0.000  {method 'sort' of 'list' objects}
2313872           1.469    0.000    1.469    0.000  {method 'startswith' of 'str' objects}
2313872           2.336    0.000    2.336    0.000  {time.gmtime}
4627744          18.051    0.000   18.051    0.000  {time.mktime}
2313872           3.193    0.000   69.863    0.000  {time.strptime}
(rows with tottime below ~1 second omitted)
from fail2ban.
As we can see, time.strptime, which is used to parse the time of every log line, is one of the largest CPU-time consumers. The search of _sre.SRE_Pattern also consumes a fair bit of time finding the time in the log line to match. As such, quickly searching through a log file to the right position, either forwards or backwards, will need some linear prediction to estimate the right position in the file to seek to.
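One common mitigation for the strptime cost, sketched here as an illustration (not something fail2ban did at the time): adjacent log lines very often carry the identical timestamp string, so caching the last parse skips most strptime calls.

```python
import calendar
import time

class CachedStrptime:
    """Illustrative sketch: memoize the most recent strptime result.
    High-volume logs repeat the same second across many lines, so the
    cache hit rate is typically high."""

    def __init__(self, fmt):
        self.fmt = fmt
        self._last = (None, None)  # (timestamp string, epoch seconds)

    def parse(self, s):
        cached_s, cached_t = self._last
        if s == cached_s:
            return cached_t        # cache hit: no strptime call at all
        t = calendar.timegm(time.strptime(s, self.fmt))
        self._last = (s, t)
        return t
```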
from fail2ban.
On Tue, 26 Nov 2013, Daniel Black wrote:
> as we can see the time.strptime that is used to parse the time of every
> log line is one of the largest CPU time consumers. The search of
> _sre.SRE_Pattern also consumes a bit of time finding the time in the log
> to match. As such, to quickly search through a log file to the right
> position, either forwards or backwards, we will need some linear
> prediction to estimate the right position in the file to seek to.

we would need a binary search... if we allowed loading the whole file into
memory it should be very easy, but doing so would probably be undesired.
Thus we should adopt one of the solutions suggested elsewhere, e.g.

http://stackoverflow.com/questions/8369175/binary-search-over-a-huge-file-with-unknown-line-length

i.e. just seek through the file while aligning to the closest EOL,
extracting the date, and then deciding where to go (forward or backward),
thus quickly drilling down to the first log entry within max(findtime,
bantime), or just findtime (since bantime might be set infinitely large).
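The seek-while-aligning-to-EOL search just described could look like this sketch. It assumes the file's lines are sorted by timestamp; parse_ts is a caller-supplied parser returning epoch seconds, faked below with plain integers:

```python
import io

def seek_to_time(fh, target, parse_ts):
    """Binary search over a file (opened in binary mode) whose lines are
    sorted by timestamp: return the byte offset of the first line whose
    parsed timestamp is >= target.  After each midpoint seek we skip ahead
    to the next newline, so we only ever parse whole lines."""
    fh.seek(0, 2)                    # 2 = SEEK_END
    lo, hi = 0, fh.tell()
    while lo < hi:
        mid = (lo + hi) // 2
        fh.seek(mid)
        if mid > 0:
            fh.readline()            # discard the partial line at mid
        line = fh.readline()
        if line and parse_ts(line) < target:
            lo = mid + 1             # wanted entry lies further down
        else:
            hi = mid                 # this line (or EOF) is at/after target
    fh.seek(lo)
    if lo > 0:
        fh.readline()                # align to the start of the next line
    return fh.tell()

# Tiny demonstration with fake integer timestamps:
log = io.BytesIO(b"1 a\n2 b\n3 c\n4 d\n")
pos = seek_to_time(log, 3, lambda line: int(line.split()[0].decode()))
print(pos)  # -> 8, the offset where the "3 c" entry starts
```

This does O(log n) seeks regardless of line length, so even multi-gigabyte logs need only a few dozen timestamp parses to find the starting point.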
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik
from fail2ban.
BTW -- thanks @grooverdan for running the profiler -- I always wondered what the bottlenecks are but never got around to running it myself ;) I wonder now if we shouldn't create a little snippet that would dump/duplicate a few times all our sample log lines (without comments) and just drill through them with all the available filters, while profiling the whole run, and set it up to run under Travis. This way we would get an up-to-date guesstimate of the bottlenecks, done for us.
from fail2ban.
FYI my time check before failregex is here: https://github.com/grooverdan/fail2ban/commit/40b70100b435333388e743edc798dd72fbc65908
It requires rewriting a lot of the filter test cases, though.
we would need a binary search
Could probably do better than binary since we've got dates at both ends and can approximate a line length.
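The "better than binary" idea (dates at both ends plus an approximate line length) amounts to an interpolation guess for the initial seek position. A hypothetical helper:

```python
def estimate_offset(t_first, t_last, target, filesize):
    """Hypothetical interpolation guess: given the timestamps at both ends
    of a log file, estimate the byte offset of 'target' by assuming entries
    are spread roughly evenly in time.  Only a starting point for a
    refining search (e.g. the binary search), never an exact answer."""
    if t_last <= t_first:
        return 0
    frac = (target - t_first) / float(t_last - t_first)
    return int(max(0.0, min(1.0, frac)) * filesize)

# A log spanning t=0..100 in a 1000-byte file: t=50 should be near the middle.
print(estimate_offset(0, 100, 50, 1000))  # -> 500
```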
Profiling, yes something should be done. I might test it out on a live system and see how that compares to this crude test (https://github.com/grooverdan/fail2ban/commit/d482a463c20dac44336ec31932cec58ce1fe205d)
from fail2ban.
On Tue, 26 Nov 2013, Daniel Black wrote:
> FYI my time check before failregex is here: [1]grooverdan@40b7010

I can't believe that it was not the case already... although in the global
scheme of things it would not really matter, since all new lines are within
findtime (except when scanning upon startup).

> Needs to rewrite a lot of the filter test cases because of it.
>
>> we would need a binary search
>
> Could probably do better than binary since we've got dates at both ends
> and can approximate a line length.

indeed -- could be even smarter than binary ;-) but it would not be
generally faster than log(n), thus "theoretically" the same ;-)

> Profiling, yes something should be done. I might test it out on a live
> system and see how that compares to this crude test
> ([2]grooverdan@d482a46)

ok
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik
from fail2ban.
If you want to look at an implementation, I'd take a look at the source code of tail. I ran strace on it and can see it seeks around to find the right point.
http://sourcecodebrowser.com/coreutils/8.13/tail_8c_source.html#l00478
from fail2ban.
Partially resolved by #480: when adding a previously used log file to a jail, it will seek to the last position and then carry on reading. Ideal for short restarts, but after longer periods the starting read position could still be numerous lines away from a log line with a timestamp within findtime.
from fail2ban.
I'm running 0.8.6 from Debian Wheezy. fail2ban does not seem to be reading back to the bantime in the logs, so I was wondering what the status of this was. Also, if I ban things forever, say with -1, what do you think should happen; how far back should it go? To the beginning of the log? To be honest, that's probably fine, at least in my case. It might take a few minutes to re-ban everyone, but that's not outrageous. And I'm OK with the interpretation of 'forever' being to the beginning of the log rather than re-banning the addresses I had in there before. However, a permanent ban file would be nice.
from fail2ban.
@mgrant0, you can try the branch sebres/ban-time-incr for issue #716 (it will possibly be part of 0.9.2). In this branch the problem you described should already be resolved.
from fail2ban.
@mgrant0 Checking the 0.8.6 release: it should read the whole log file from the start, but it will only consider log lines that fall within findtime. 0.9.x releases can maintain bans over restarts and, if configured so, leave bans in place forever.
Otherwise not much progress on this issue. I'm not sure this feature is even critical any more, given that bans are maintained over restarts.
from fail2ban.
Ah hah! With findtime = 86400 it now goes back a whole day, excellent, thanks! I guess it won't go looking into compressed old log files; that would be too much. But a day is way better than nothing for me at the moment. 0.9.x will solve my other issues when it gets into wheezy, thanks.
from fail2ban.
I believe this was implemented in 0.10 (known as seekToTime)...
I will close it now; just reopen if I'm wrong.
from fail2ban.