Comments (4)
MimirCacheRequestErrors:
- The cache index-cache used by Mimir grafana-mimir/devops-prod is experiencing 7.79% errors for getmulti operation.
- The cache chunks-cache used by Mimir grafana-mimir is experiencing 97.13% errors for getmulti operation.
- The cache chunks-cache used by Mimir grafana-mimir is experiencing 66.55% errors for set operation.
Internally at Grafana, we use a 750ms timeout for both the index cache and the chunks cache (these are the highest-traffic caches). I've been meaning to change the values in the OSS Jsonnet and Helm chart.
Scaling the Memcached cluster doesn't seem to resolve the timeouts. The timeouts are reduced if I increase the store-gateway's Memcached client timeout; however, Memcached get/set latency seems to scale proportionally with this setting, so adjusting it to something larger than 450ms seems unreasonable.
To me this indicates the timeout is simply too short: you'll keep hitting it at the p99 until you set it to something larger than however long the operations actually take.
I notice that you have TLS enabled for the cache connections. The default values are picked with plaintext connections in mind, assuming that creating a new connection is basically "free".
With TLS, you'll likely need to:
- Increase the connection timeout to something like 1s (this is what the Jsonnet does): `memcached.connect_timeout: 1s`
- Give Memcached more CPU (default is 0.5 cores)
- Give the store-gateways more CPU
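On the CPU side, in the mimir-distributed Helm chart that might look something like the following. This is a sketch: the `chunks-cache` key and `resources` layout are assumptions about the chart's values file, and 2 cores is an illustrative number, not a recommendation.

```yaml
# mimir-distributed values.yaml (sketch, hypothetical values)
chunks-cache:
  enabled: true
  resources:
    requests:
      cpu: "2"   # up from the 0.5-core default mentioned above
    limits:
      cpu: "2"
```

The same shape would apply to the index cache if it is also CPU-starved.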
from mimir.
Thanks very much for the response @56quarters!
I made the following adjustments based on your suggestions (starting at 12:15 in the graphs below):
```yaml
blocks_storage:
  bucket_store:
    chunks_cache:
      memcached:
        timeout: 750ms
        connect_timeout: 1s
    index_cache:
      memcached:
        timeout: 750ms
        connect_timeout: 1s
```
This seems to have helped a bit, more so for the index-cache. We don't currently have a CPU limit set on our chunks-cache; you can see from the graphs that some pods use upwards of 2-3 CPUs under heavy load. Still, MimirCacheRequestErrors is triggering often, predominantly for getmulti operations. The getmulti p99 still seems to be hitting an upper bound, perhaps indicating the timeout needs to be increased further to accommodate the longer read I/O ops?
The cache chunks-cache used by Mimir grafana-mimir/devops-stg is experiencing 65.21% errors for getmulti operation.
In addition to the timeout configs, increasing the store-gateway CPU resource request from 1 to 2 cores (starting at 15:00 in the graphs below) doesn't seem to have had much impact.
Logs from the store-gateway more frequently display the following with or without the additional CPU power:
```
caller=client.go:144 level=debug msg="failed to store item to cache because the async buffer is full" err="the async queue is full" size=25000
caller=client.go:129 level=debug msg="failed to store item to cache" key=subrange:prod/01HQEZC3CD5XD6NRYXKAV5B9ST/chunks/000001:19024000:19040000 sizeBytes=16000 err="read tcp 10.42.34.219:33812->10.42.34.214:11211: i/o timeout"
```
What are the possible implications of increasing `max_async_buffer_size`? Should we consider increasing it? My guess is that this would only exacerbate our Memcached timeout issues.
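For context, the buffer in the first log line corresponds to this client setting (a sketch, assuming Mimir's blocks_storage YAML layout; the 50000 value is purely illustrative):

```yaml
blocks_storage:
  bucket_store:
    chunks_cache:
      memcached:
        # Default is 25000, matching the "size=25000" in the log above.
        # Raising it only lets more writes queue up while Memcached is
        # slow; it does not make the cache itself any faster.
        max_async_buffer_size: 50000
```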
On occasion I do see MimirSchedulerQueriesStuck getting triggered, an issue I'm assuming we can easily solve by scaling up our querier replicas?
Hi @56quarters, I closed this issue by accident, but I still haven't come to a resolution. Any chance you might be able to take a look at my questions above? Thanks!
> The getmulti p99 seems to be hitting an upper bound still, perhaps indicating the timeout needs to be increased further to accommodate the longer read I/O ops?
That'd be my guess. To troubleshoot this I'd keep adjusting the read timeout and connection timeout up until almost all requests succeed. Then we can adjust it back to something reasonable based on looking at how long requests and connections take at steady-state.
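One way to pick that steady-state value is to look at the observed operation latency directly. This is a sketch: the metric name and labels are assumptions based on the Thanos cache client that Mimir embeds, and may differ across Mimir versions.

```promql
# p99 duration of Memcached getmulti operations over the last hour,
# per cache; set the client timeout comfortably above this value.
histogram_quantile(0.99,
  sum by (name, le) (
    rate(thanos_memcached_operation_duration_seconds_bucket{operation="getmulti"}[1h])
  )
)
```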
> What are the possible implications of increasing `max_async_buffer_size`? Should we consider increasing this? My guess is that this would only exacerbate our Memcached timeout issues.
I'd leave this alone for now because it's a symptom of things being slow. It shouldn't be required once we've got things working more reliably.
> On occasion I do see MimirSchedulerQueriesStuck getting triggered, an issue I'm assuming we can easily solve by scaling up our querier replicas?
Another symptom of "things are slow". I'd leave this for now until we get caching sorted out.