Comments (7)
Yes, I know the solution.
- You need to downgrade the kernel of your OS to 16
- Remove the database you have. Instead, install the Hadoop cluster.
- Connect GRR to Hadoop
It should fix the issue.
from grr.
Thanks for your report. This looks like a legit issue on the GRR client side, we'll look into it. Increasing this limit on the client side likely creates more problems on the server side, so changing the chunking logic or similar is probably the way forward.
So I have tried decreasing the chunk size to 2000000, which is less than what the agent is able to receive, and the same issue occurred:
CRITICAL:2022-06-01 10:20:24,761 fleetspeak_client:117] Fatal error occurred:
Traceback (most recent call last):
File "site-packages\grr_response_client\fleetspeak_client.py", line 111, in _RunInLoop
File "site-packages\grr_response_client\fleetspeak_client.py", line 209, in _SendOp
File "site-packages\grr_response_client\fleetspeak_client.py", line 176, in _SendMessages
File "site-packages\fleetspeak\client_connector\connector.py", line 144, in Send
File "site-packages\fleetspeak\client_connector\connector.py", line 154, in _SendImpl
ValueError: Serialized message too large, size must be at most 2097152, got 2579672
So it is definitely something to be fixed in GRR.
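For illustration, the check that produces this error can be reproduced in isolation. A minimal sketch (the 2097152 limit and the 2579672 payload size are taken from the error message above; `check_send_size` is a hypothetical stand-in for the connector's internal check, not Fleetspeak's actual code):

```python
# Stand-in for the Fleetspeak connector's hard message size check.
# 2 MiB = 2097152 bytes, the limit shown in the traceback.
MAX_SIZE = 2 * 1024 * 1024

def check_send_size(serialized: bytes) -> None:
    """Raise ValueError if a serialized payload exceeds the hard limit."""
    if len(serialized) > MAX_SIZE:
        raise ValueError(
            "Serialized message too large, size must be at most "
            f"{MAX_SIZE}, got {len(serialized)}")

check_send_size(b"x" * 100)          # small payload: passes silently
try:
    check_send_size(b"x" * 2579672)  # the size from the log above: raises
except ValueError as e:
    print(e)
```

Note that lowering the flow's chunk size does not help here, because the oversized message is the single result proto, not a file chunk.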
Ok, so what happens here is pretty interesting. The issue, most definitely, happens on the client side and has nothing to do with how the server database is set up.
When working through Fleetspeak, the GRR client runs as a subprocess of the Fleetspeak client. They communicate through shared file descriptors. When the GRR client wants to send a message to its server, it sends the message to the Fleetspeak client on the same machine through the shared fd. The Fleetspeak client, however, has a hard message size limit of 2 MB:
https://github.com/google/fleetspeak/blob/93b2b9a40808306722875abbd5434af4634c6531/fleetspeak/src/client/channel/channel.go#L32
The issue happens because GRR tries to send a message that's bigger than 2 MB. There's a dedicated check for this in the GRR client's Fleetspeak connector code (MAX_SIZE is set to 2 MB):
https://github.com/google/fleetspeak/blob/master/fleetspeak_python/fleetspeak/client_connector/connector.py#L151
GRR should be careful enough to chunk the messages. Not sure why chunking failed in this case - will investigate further.
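Chunking before handing a payload to Fleetspeak could look like this minimal sketch (`chunk_payload` is a hypothetical helper, not GRR's actual chunking logic; reassembly on the server side is omitted):

```python
# Hypothetical chunking helper: split a serialized payload into pieces
# that each fit under the Fleetspeak client's hard message size limit.
MAX_SIZE = 2 * 1024 * 1024  # 2 MiB, the Fleetspeak hard limit

def chunk_payload(payload: bytes, limit: int = MAX_SIZE):
    """Yield consecutive slices of `payload`, each at most `limit` bytes."""
    for offset in range(0, len(payload), limit):
        yield payload[offset:offset + limit]

# The 2579672-byte payload from the traceback splits into two chunks,
# each of which would pass the connector's size check.
chunks = list(chunk_payload(b"x" * 2579672))
assert all(len(c) <= MAX_SIZE for c in chunks)
```

In practice each chunk would also need sequencing metadata so the server can reassemble the original message, which is why the fix belongs in GRR's response-handling code rather than in the transport.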
@bprykhodchenko Could you please specify the exact flow arguments you used to reproduce the issue?
I looked at the YaraProcessDump client action. It dumps the memory to disk and then sends back a data structure with information about all the processes:
https://github.com/google/grr/blob/master/grr/client/grr_response_client/client_actions/memory.py#L767
What this means: if the result proto is larger than 2 MB in serialized form, the client action will fail. If the machine has a lot of memory and a lot of processes, then growing over 2 MB is quite possible. We need to look into either:
- Chunking the response, or
- Increasing the limit from 2 MB to a higher value. I have to check what the motivation for the 2 MB limit is.
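One way the chunking option could work is to pack per-process records into batches whose serialized size stays under the limit, instead of one oversized response. A sketch, using JSON as a stand-in for proto serialization (`batch_records` and `serialize` are hypothetical names, not GRR's API):

```python
import json

MAX_SIZE = 2 * 1024 * 1024  # 2 MiB Fleetspeak hard limit

def serialize(records) -> bytes:
    # Stand-in for proto serialization of a response message.
    return json.dumps(records).encode("utf-8")

def batch_records(records, limit=MAX_SIZE):
    """Greedily pack records into batches whose serialized size <= limit.

    Assumes no single record exceeds the limit on its own; a record that
    does would still produce an oversized batch.
    """
    batches, current = [], []
    for record in records:
        candidate = current + [record]
        if current and len(serialize(candidate)) > limit:
            batches.append(current)   # flush the full batch
            current = [record]        # start a new one with this record
        else:
            current = candidate
    if current:
        batches.append(current)
    return batches

# 500 processes with ~10 KB of region info each (~5 MB total) would
# overflow a single message, but fit comfortably across a few batches.
records = [{"pid": i, "regions": "r" * 10000} for i in range(500)]
batches = batch_records(records)
assert all(len(serialize(b)) <= MAX_SIZE for b in batches)
```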
Hello,
as for your question - I just ran the YARA Memory Dump from the UI; I did not use the CLI, so there were no specific command-line arguments.
As for the solution,
- I tried changing the chunk size to something smaller than what the client can "eat" (in the flow parameters), but I was running into the same issue.
- Should I wait for a fixed version, or
- should I download the source code, change MAX_SIZE in the connector.py file to, say, 4 MB, and install the server from source?
A few comments:
- The issue is related to how many processes you dump at the same time with a single flow. The GRR client tries to send a data structure with the memory regions map to the server, and if this data structure is too big, you get the failure. One workaround option is to run 2 flows with process regexes, one matching processes with names from a to k, and the other one matching processes with names from l to z. That will likely help.
- The right fix is to make the YaraProcessDump client action chunk its output. I will look into this next week - unfortunately, I can't provide an ETA until I start working on it.
- Changing MAX_SIZE on the GRR side is only part of the solution. The 2 MB limit is also hardcoded on the Fleetspeak client side. Fleetspeak is written in Go and is shipped with GRR in binary form (see https://pypi.org/project/fleetspeak-client-bin/). You'd need to recompile the fleetspeak-client Go binary and replace the fleetspeak-client-bin package in order for the fix to work. It's not exactly straightforward, but if you're feeling adventurous, you can try it. The relevant place in the Fleetspeak code: https://github.com/google/fleetspeak/blob/master/fleetspeak/src/client/channel/channel.go#L48
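The alphabet-partition workaround from the first bullet can be sketched like this (process names and patterns are illustrative; in GRR you would supply each pattern as the flow's process regex and run the flow twice):

```python
import re

# Two regexes that partition process names by first letter, so each
# flow dumps roughly half the processes and each result stays smaller.
first_half = re.compile(r"^[a-k]", re.IGNORECASE)
second_half = re.compile(r"^[l-z]", re.IGNORECASE)

# Illustrative process list, not output from a real machine.
processes = ["chrome.exe", "explorer.exe", "svchost.exe", "winlogon.exe"]

flow_one = [p for p in processes if first_half.match(p)]
flow_two = [p for p in processes if second_half.match(p)]
print(flow_one)  # ['chrome.exe', 'explorer.exe']
print(flow_two)  # ['svchost.exe', 'winlogon.exe']
```

Any split that keeps each flow's result under the 2 MB limit works; first-letter ranges are just an easy one to express as a regex.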
from grr.