Comments (41)

jptosso avatar jptosso commented on September 3, 2024 3

Hey, we have a new feature that should improve this; if you can build your own Caddy, that would help with testing:

XCADDY_GO_BUILD_FLAGS='-tags=memoize_builders' xcaddy build --with github.com/corazawaf/coraza-caddy/v2

If you are using Docker, you can build it with a Dockerfile like:

FROM caddy:2.7-builder
RUN XCADDY_GO_BUILD_FLAGS='-tags=memoize_builders' xcaddy build --with github.com/corazawaf/coraza-caddy/v2

This implements a new patch that reuses the memory from PM and RX operators.

skixmix avatar skixmix commented on September 3, 2024 3

Hello,

I've tested the new feature with memoize_builders and performed the same experiment as in #76 (comment). RAM consumption now goes from 340 MB to 540 MB with 10 sites, so it looks really promising!

Also, with this version, #88 is no longer an issue.

Thank you for the awesome work!
Simone

jcchavezs avatar jcchavezs commented on September 3, 2024 2

According to #76 (comment) we might need to memoize the aho-corasick dictionaries too.

anamba avatar anamba commented on September 3, 2024 2

More testing needed, but initial tests show a huge decrease in memory usage. Promising!

mholt avatar mholt commented on September 3, 2024 1

I don't know the exact situation in the code, of course, but there are two simple rules I try to abide by:

  1. Store all state in the module's struct itself; this will get garbage collected after a config reload. If it doesn't, then not "all state" is being stored in the module's struct.
  2. If you do need global state, or state that persists across reloads but is based on configuration that may or may not be needed after a reload, use a UsagePool (https://pkg.go.dev/github.com/caddyserver/caddy/v2#UsagePool), often in conjunction with the module implementing CleanerUpper (https://pkg.go.dev/github.com/caddyserver/caddy/v2#CleanerUpper). This allows modules to say when they're done with values as configs are cycled through, and Caddy & Go's GC will take care of the rest. (See the sketch below.)
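
A minimal sketch of how point 2 could look for a Coraza-style handler, assuming hypothetical names (wafPool, compiledWAF, Middleware); UsagePool, Destructor, Provisioner and CleanerUpper are the Caddy APIs linked above:

package example // hypothetical module package

import (
	"strings"

	"github.com/caddyserver/caddy/v2"
)

// One process-wide pool that survives config reloads.
var wafPool = caddy.NewUsagePool()

// compiledWAF stands in for the expensive shared state (rules, regexes,
// aho-corasick dictionaries). Implementing caddy.Destructor lets the pool
// free it once no active config references it anymore.
type compiledWAF struct{}

func (*compiledWAF) Destruct() error { return nil }

// Middleware sketches a Coraza-like handler module.
type Middleware struct {
	Include []string `json:"include,omitempty"`

	poolKey string
}

// Provision builds (or reuses) the shared WAF keyed by its rule set.
func (m *Middleware) Provision(ctx caddy.Context) error {
	m.poolKey = strings.Join(m.Include, "|")
	_, _, err := wafPool.LoadOrNew(m.poolKey, func() (caddy.Destructor, error) {
		// Expensive compilation runs only the first time this key is seen.
		return &compiledWAF{}, nil
	})
	return err
}

// Cleanup runs when this module instance is discarded after a reload;
// the pool destructs the value once the last referencing config is gone.
func (m *Middleware) Cleanup() error {
	_, err := wafPool.Delete(m.poolKey)
	return err
}

// Interface guards.
var (
	_ caddy.Provisioner  = (*Middleware)(nil)
	_ caddy.CleanerUpper = (*Middleware)(nil)
)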


jcchavezs avatar jcchavezs commented on September 3, 2024 1

I believe you can do:

XCADDY_GO_BUILD_FLAGS="-ldflags '-w s' -trimpath -tags my_custom_tag" xcaddy build --with github.com/corazawaf/coraza-caddy/v2

cc @mholt

jcchavezs avatar jcchavezs commented on September 3, 2024 1

@skixmix more memory improvements are coming. I am working on another PR to lazily load regexes.

@jptosso maybe we can enable it, but we need to make sure the risks, although minimal, are well documented.

jptosso avatar jptosso commented on September 3, 2024

Hey! Could you provide other metrics, like concurrent users and traffic information? Each WAF instance is just a few KBs of memory; it shouldn't create that much overhead.

It could be related to the garbage collector.

skixmix avatar skixmix commented on September 3, 2024

Hello,

Thank you. As of now I'm testing it on a VM running Ubuntu 20.04.6 LTS that is not exposed to the internet, so there are no clients and no traffic. On the other VMs in production, I have Caddy without the WAF module installed, and the same configurations occupy roughly 1.3 GB of RAM. Whenever I perform a Caddy reload or restart, it takes up some additional memory, goes up to 2 GB, and then releases it back to 1.3 GB.

This behaviour changes in the test VM with the coraza module installed, and what happens is what I described before.

jcchavezs avatar jcchavezs commented on September 3, 2024

I've been checking this issue but could not get my head around it.

Locally

When I build the binary and run it locally with the example Caddyfile:

# run httpbin in terminal 1
go run github.com/mccutchen/go-httpbin/v2/cmd/[email protected] -port 8081

# run caddy in terminal 2, make sure you change the host in caddyfile from httpbin to localhost
mage buildCaddy
./build/caddy run --config example/Caddyfile --adapter caddyfile

# run ab in terminal 3
ab -n 100000 -c 100 http://localhost:8080/

I get a caddy server going from 40MB to 93MB (checked with Activity Monitor)

Docker

If I run the same ab command but with the example running in a container, memory can go up from 40 MB to 5 GB and slowly come back down to 2.87 GB (checked with docker stats); in fact, every curl localhost:8080/ increases memory usage by 1.1 MB. Both tests are similar, although the local one uses the mac build and the Docker one uses the Linux build (GOOS=linux).

# run example
go run mage.go buildExample runExample

# run ab
ab -n 100000 -c 100 http://localhost:8080/

[screenshot]

Any clue why this could happen? cc @anuraaga @mholt

mholt avatar mholt commented on September 3, 2024

To know for sure, you'll need to capture a memory profile.

You can get one from :2019/debug/pprof on your server and view the allocations.

You can use go tool pprof to view them interactively, though sometimes it can be obvious just looking at the raw dump. I like generating an SVG to see what call stacks are allocating the most memory.

Here's an example tutorial (skip the top part that talks about adding the pprof handlers, Caddy already does this for you, hence the pprof endpoint you can load): https://www.freecodecamp.org/news/how-i-investigated-memory-leaks-in-go-using-pprof-on-a-large-codebase-4bec4325e192/

mholt avatar mholt commented on September 3, 2024

Not really a fair comparison since the two are completely different platforms/OSes. Will need a profile to be sure.

skixmix avatar skixmix commented on September 3, 2024

Hello,

Thank you for your support. If it is useful: each time I reboot the machine or run a systemctl restart of the Caddy service, RAM usage reverts to approximately 1 GB. After that, even if I execute multiple curl requests against various sites, it remains stable and the usage does not increase by 1 MB per request as it does in your situation. Additionally, as mentioned previously, I observe a rise in RAM utilization whenever I add a new site (or enable the WAF on a site where the directive was previously absent) and subsequently rebuild/reload the Caddy configuration.

Caddy version: v2.6.4 h1:2hwYqiRwk1tf3VruhMpLcYTg+11fCdr8S3jhNAdnPy8=
Coraza Caddy version: 1.2.2

The VM is running on VMware, not Docker, and on the same host there are two additional VMs without the Coraza module installed. All VMs share the same Caddy configurations (excluding the WAF directive, which is absent in the production VMs) and the same system:

  • OS: Ubuntu 20.04
  • RAM: 4 GB (1.4-1.6 GB used in production, same on the test machine, but increasing as described above)
  • CPU: 4 vCores (0.15 - 0.20 load average on production, 0.01 on the test machine)

skixmix avatar skixmix commented on September 3, 2024

You mean without the site-specific configuration?

Currently, I have this configuration on every site (I'm not truly customizing it; it was simply there so I'd have somewhere to place custom configuration if necessary):

#   ==============================  WAF STATUS  ====================================

# Detection Only Mode
SecRuleEngine DetectionOnly

#   ==============================  LOGGING  =====================================

# Enable Audit Engine - use RelevantOnly to avoid logging everything
SecAuditEngine RelevantOnly
# Log file
SecAuditLogStorageDir /waf/coraza/audit/
SecAuditLogFormat JSON
SecAuditLog /waf/coraza/audit/202.log

# Log transaction
SecAuditLogParts ABIFHZ
# Use a single file for logging.
SecAuditLogType Serial

#   ======================== DISABLED OWASP RULES =============================
SecRuleRemoveByID 920430
SecRuleRemoveByID 932236
SecRuleRemoveByID 942421
SecRuleRemoveByID 920272
SecRuleRemoveByID 901340

skixmix avatar skixmix commented on September 3, 2024

I tried replacing every site's coraza_waf config with this one:

coraza_waf {
    include /waf/coraza/coraza.conf-recommended
    include /waf/coreruleset/crs-setup.conf.example
    include /waf/coreruleset/rules/*.conf
}

without the site-specific configuration, and the problem persists even after reloading. The initial RAM usage was 1.64 GB, which increased to 2.33 GB. After a few minutes, it released memory, bringing it down to 1.88 GB. However, I'm unsure why it continues to retain that extra 200 MB of RAM.

skixmix avatar skixmix commented on September 3, 2024

Hello, is there any update on this issue?

jcchavezs avatar jcchavezs commented on September 3, 2024

Got the profiling (thanks @mholt for the heads up on the endpoint)

[pprof profile graph]

jcchavezs avatar jcchavezs commented on September 3, 2024

I think one issue here is that we are initializing one WAF instance per site, meaning that regexes and dictionaries (used by aho-corasick) are initialized many times even when the content is identical and immutable. One idea to overcome this could be to use a regex map and a dictionary map; however, we need to be careful, as TinyGo doesn't support synchronization (hence a non-concurrent map for TinyGo). This way, if you spawn 100 WAF instances with CRS, they all share the same regexes in memory. cc @anuraaga
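
A minimal sketch of that idea for the rx side, assuming a hypothetical package-level cache (the eventual coraza implementation may differ):

package memoize // hypothetical helper package

import (
	"regexp"
	"sync"
)

var (
	regexMu    sync.RWMutex
	regexCache = map[string]*regexp.Regexp{}
)

// CompileRegex returns a process-wide shared *regexp.Regexp per pattern, so
// that 100 WAF instances loading the same CRS rules point at one compiled
// object instead of each holding its own copy.
func CompileRegex(pattern string) (*regexp.Regexp, error) {
	regexMu.RLock()
	re, ok := regexCache[pattern]
	regexMu.RUnlock()
	if ok {
		return re, nil
	}
	re, err := regexp.Compile(pattern)
	if err != nil {
		return nil, err
	}
	regexMu.Lock()
	regexCache[pattern] = re
	regexMu.Unlock()
	return re, nil
}

On TinyGo, where everything runs on one thread, the mutexes could be dropped in a build-tag-guarded variant; a dictionary cache for the pm/aho-corasick builders would follow the same shape.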


anuraaga avatar anuraaga commented on September 3, 2024

I think a map for caching by pattern can work OK. Since TinyGo only runs on one thread, there are no worries about concurrency. One issue is that all regexes then become memory leaks: the pattern of replacing the WAF instance with a different one, with different rules, becomes questionable. It's not common, so maybe it doesn't need to be supported, but I guess making the cache opt-in with a build tag is safer.
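
The opt-in could look roughly like the following, compiled only when xcaddy is invoked with a tag such as memoize_builders (file layout and names are illustrative, not coraza's actual code):

//go:build memoize_builders

package memoize

import "regexp"

// Tag-enabled variant: cached regexes live for the whole process, which is
// exactly the "rules replaced on reload are never freed" trade-off noted
// above. Safe without locking under TinyGo's single-threaded runtime.
var patternCache = map[string]*regexp.Regexp{}

func compilePattern(pattern string) (*regexp.Regexp, error) {
	if re, ok := patternCache[pattern]; ok {
		return re, nil
	}
	re, err := regexp.Compile(pattern)
	if err != nil {
		return nil, err
	}
	patternCache[pattern] = re
	return re, nil
}

A sibling file guarded by //go:build !memoize_builders would implement compilePattern as a plain call to regexp.Compile, so the default build keeps the current behaviour.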


skixmix avatar skixmix commented on September 3, 2024

Hello,

If I got it right, there are two problems here:

  1. When using the WAF module with multiple sites, it consumes a lot of memory because of regex allocation (duplicate rules saved in separate memory blocks for each site).
  2. Every time we reload the Caddy configuration, the module appears to use up some RAM and then doesn't release all of it, even if nothing changed in the config.

Concerning the first problem, I can consider allocating additional RAM to the VM as a temporary solution. However, I'm unsure how to address the second issue. In the long run, it will take up all the RAM...
Do you have any updates or suggestions?

Thank you,
Simone

jptosso avatar jptosso commented on September 3, 2024

Hello, and thank you for the detailed analysis. Indeed, we are working on a solution: as described by JC, we will implement caching mechanisms to avoid duplicating dictionaries for pm and regexes for rx. It will take some time to design this, as it is a sensitive change. We will use this issue as a reference, so we will keep you posted.

Multiple hosts in a single web server is an important use case for coraza.

jcchavezs avatar jcchavezs commented on September 3, 2024

@anuraaga how about we introduce a Close method on WAF that deletes all rules (and regexes), which could be called on reload? Is there any way to hook that into a reload in a plugin, @mholt?
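
For illustration only, the proposed shape might be something like this (hypothetical sketch, not coraza's actual interface):

package waf // sketch only

// Transaction is elided; only the lifecycle shape matters here.
type Transaction interface{}

// WAF gains a Close that releases compiled rules, regexes and aho-corasick
// dictionaries, so a host plugin (e.g. coraza-caddy) can call it from its
// cleanup hook when a config reload discards the old instance.
type WAF interface {
	NewTransaction() Transaction
	Close() error
}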


anuraaga avatar anuraaga commented on September 3, 2024

Having a Close is important, SGTM

jcchavezs avatar jcchavezs commented on September 3, 2024

Do you consider it as a breaking change @anuraaga @jptosso?

anuraaga avatar anuraaga commented on September 3, 2024

No. Perhaps we could document it better (similar to what wazero does), but our interfaces are not meant for outside implementations. It's fine to keep that up, and we can make it more explicit if needed.

skixmix avatar skixmix commented on September 3, 2024

Hello everyone,

thanks for the prompt and diligent responses. Will the upcoming 2.0 module version include a resolution for this issue? If yes, do you have any timeline in mind?

Currently, I have Caddy deployed in production for various sites, with the WAF temporarily disabled. As a workaround, I've implemented a server-side (Node.js Express) WAF. However, I'm eager to integrate Coraza on Caddy as soon as possible.

Thank you,
Simone

skixmix avatar skixmix commented on September 3, 2024

Hi,

If you could give this fix higher priority, it would be fantastic 👍
Alternatively, I would be willing to test a custom build tag. If we choose that route, will I need to compile Caddy in a specific manner?

mholt avatar mholt commented on September 3, 2024

Let me know how I can help :)

jcchavezs avatar jcchavezs commented on September 3, 2024

Store all state in the module's struct itself; this will get garbage collected after a config reload. If it doesn't, then not "all state" is being stored in the module's struct.

Is there any callback for when a module struct is going to be dismissed?

mholt avatar mholt commented on September 3, 2024

Good question: yes there is! :)

a module implementing the CleanerUpper: https://pkg.go.dev/github.com/caddyserver/caddy/v2#CleanerUpper -- this allows modules to say when they're done with values as configs are cycled through, and Caddy & Go's GC will take care of the rest.

jcchavezs avatar jcchavezs commented on September 3, 2024

Awesome! Thanks @mholt

jcchavezs avatar jcchavezs commented on September 3, 2024

@skixmix I opened this PR: corazawaf/coraza#836. Please use that commit to give it a try and we'll see. You don't need to use a build tag.

Use something like:

xcaddy build --with github.com/corazawaf/coraza-caddy/v2@24ab5d92e9d71ebde31a34e0e86c595b568e79d3

skixmix avatar skixmix commented on September 3, 2024

Great! Thank you so much. I'll give it a try as soon as possible and provide you with an update.

Have a nice day,
Simone

skixmix avatar skixmix commented on September 3, 2024

Hello,

I have an update. I started with a clean Caddy setup, no sites configured and no traffic. Then I added 23 sites one by one, each with its own Caddyfile and WAF configuration + OWASP CRS.

  • RAM consumption before starting: 340MB
  • RAM consumption after 10 sites: 1.20 GB
  • RAM consumption after 23 sites: 2.06 GB

Throughout the process, I noticed RAM usage peaking at 3 GB and then gradually dropping

  • to 1.8 GB (with 10 sites) and
  • to 2.8 GB (with 23 sites)

before reaching a stable consumption.


This is what happens when I add a new site or when I perform a reload of a site configuration (in this specific case, I added a new site).

Initial RAM consumption: [screenshot]

During configuration reload: [screenshot]

After configuration reload: [screenshot]

After more or less 3 minutes: [screenshot]

After 30 minutes: [screenshot]


Production comparison

This is the current state of one of the VMs that I have running on one of my DCs in production, with more than 100 site configurations, same as the test ones, but without the WAF module:
[screenshot]

--

Recap

The problem persists: each configuration uses over 100 MB of RAM instead of the 8-9 MB used in production without the WAF, more than 10 times higher. Nevertheless, I believe one of the issues has been resolved, as memory usage now comes back down rather than leaking indefinitely. 😄

mholt avatar mholt commented on September 3, 2024

You can find out what's using memory by capturing a profile. Caddy exposes HTTP endpoints for this at :2019/debug/pprof -- this is a simple HTML document with links to collect various profiles, either instantaneously or over a time period if you add query params. Capture the heap profile then use go tool pprof ... to open the profile; type top to see the top allocations, or svg to generate a vector image you can view in your browser.

anamba avatar anamba commented on September 3, 2024

I believe I am running into this issue... on startup, Caddy consumes about 1.5 GB of RAM, but each successive caddy reload (when there are config changes to apply) appears to add on about another 1.5 GB. Since the machine in question has only 4 GB of physical RAM, this becomes a problem quickly.

The memory usage does appear to come back down somewhat about 3-5 minutes later, but unlike the graphs above, I am not seeing memory usage level off; it just seems to settle at a higher level each time (eventually getting either OOM-killed or restarted by monit when :2019 becomes slow to respond).

The high memory consumption in general is also a problem... this server has 90 sites, and I would like to enable coraza on all of them eventually, but that is currently not possible with only 4 GB of RAM. (It is a dev server; I am trying to work out the optimal coraza config here before enabling coraza on the production equivalents.)

The behavior I have described here is based on a build using the PR branch mentioned above in #76 (comment).

skixmix avatar skixmix commented on September 3, 2024

Hi, do you have any update about this issue? Also, for v2, #88 is still pending.

Thank you,
Simone

jptosso avatar jptosso commented on September 3, 2024

Great news! @jcchavezs do you think we should enable this feature by default?
