sentry-kubernetes / charts
Easily deploy Sentry on your Kubernetes Cluster
License: MIT License
helm install sentry . --debug
install.go:158: [debug] Original chart version: ""
install.go:175: [debug] CHART PATH: /Users/arian/repos/charts/sentry
client.go:234: [debug] Starting delete for "sentry" ConfigMap
client.go:98: [debug] creating 1 resource(s)
client.go:234: [debug] Starting delete for "snuba" ConfigMap
client.go:98: [debug] creating 1 resource(s)
client.go:234: [debug] Starting delete for "snuba-db-init" Job
client.go:259: [debug] jobs.batch "snuba-db-init" not found
client.go:98: [debug] creating 1 resource(s)
client.go:439: [debug] Watching for changes to Job snuba-db-init with timeout of 5m0s
client.go:467: [debug] Add/Modify event for snuba-db-init: ADDED
client.go:506: [debug] snuba-db-init: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:467: [debug] Add/Modify event for snuba-db-init: MODIFIED
client.go:506: [debug] snuba-db-init: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
Error: failed pre-install: timed out waiting for the condition
helm.go:75: [debug] failed pre-install: timed out waiting for the condition
helm version
version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}
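The 5m0s in the watch line above is Helm's default wait timeout for hooks; if the snuba-db-init job is merely slow (for example, waiting on ClickHouse to come up) rather than broken, one thing to try, as a sketch and not a guaranteed fix, is raising that timeout:

```shell
# Give pre-install hook jobs more time to finish (20m is an arbitrary choice)
helm install sentry . --debug --timeout 20m0s
```

If the job never completes even with a longer timeout, inspecting the snuba-db-init pod logs is the next step.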
Would you consider adopting the incubator/sentry-kubernetes
chart into this repo?
Hi, I'm not all that familiar with the deprecated chart, but I'm having some trouble installing this chart as a Helm dependency in my project. When doing a helm search sentry
after adding the repo from the README.md, the Chart version is 3.1.0 as of now. But when inserting it as a Chart dependency, doing a helm dep update and looking at the values.yaml in the downloaded sentry-3.1.0.tgz, the file looks very different to the current release in this github repo. And it's not just missing comments; the whole values.yaml has a completely different structure. So now I'm not sure if the 3.1.0 Chart version is an old version or just a completely different Chart altogether. Even the Chart.yaml looks very different from the one in the latest 3.0.1 release in this repo.
I'm now quite confused as to how I could get the latest chart of this repo without downloading it from the releases page and adding it manually. Maybe it's just confusion on my part, but I can imagine others might have the same issue.
Can anyone clear that up, please?
I'll undertake this in a local fork for the time being, but I really think this chart should offer a direct upgrade path if it is replacing the official stable repository.
In my case, we'd like to go from Sentry 9 -> Sentry 10, but at present, out of the box, this chart does not offer that. I'll be making updates to support this.
Updating this last statement: It's a helm3 chart. Deps exist. Egg on face!
I attempted to use latest tags for sentry and snuba:
getsentry/sentry:00ae12b89402f10158e36413d35a94d642b56082
getsentry/snuba:3b11c3b64901d91fbbd0622ca718bcb7d871dd4a
But once rolled out, sentry became unusable, with every page giving internal error.
snuba-api, snuba-consumer were logging this error:
clickhouse_driver.errors.NetworkError: Code: 210. Cannot assign requested address (localhost:9000)
I wonder if something changed with regard to how they expect the clickhouse address to be provided, since it shouldn't be using localhost to connect to clickhouse.
@J0sh0nat0r you mentioned previously that you are running the latest tags for sentry? Did you do anything differently on your end to make it work?
This is using chart version 1.3.1
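If the newer snuba images read the ClickHouse address from environment variables that the 1.3.1 chart doesn't set, one hedged workaround is to inject them through the chart's snuba env values. The variable names CLICKHOUSE_HOST/CLICKHOUSE_PORT and the service name sentry-clickhouse below are assumptions, not something confirmed by this chart, and the exact shape of the env value may differ from how the chart templates it:

```yaml
snuba:
  api:
    env:
      - name: CLICKHOUSE_HOST   # assumed variable name
        value: sentry-clickhouse
      - name: CLICKHOUSE_PORT   # assumed variable name
        value: "9000"
  consumer:
    env:
      - name: CLICKHOUSE_HOST   # assumed variable name
        value: sentry-clickhouse
```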
Hi there,
When redeploying the helm chart, with the same values file, I am experiencing the below:
upgrade.go:291: [debug] warning: Upgrade "sentry" failed: failed to replace object: Service "sentry-kafka" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-web" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-snuba" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Error: UPGRADE FAILED: failed to replace object: Service "sentry-kafka" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-web" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-snuba" is invalid: spec.clusterIP: Invalid value: "": field is immutable
helm.go:75: [debug] failed to replace object: Service "sentry-kafka" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-web" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-snuba" is invalid: spec.clusterIP: Invalid value: "": field is immutable
This is how I am invoking helm
helm upgrade --install --debug \
$APP_NAME \
sentry/sentry \
--namespace=$APP_NAME \
--version 3.0.1 \
-f values.yaml
Any ideas?
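One commonly suggested, but disruptive, workaround for the immutable spec.clusterIP error is to delete the offending Services so the next upgrade can recreate them. This is a sketch, it briefly breaks in-cluster connectivity to those Services, and you should verify the Service names in your namespace first:

```shell
# WARNING: deleting these Services causes a short connectivity outage
kubectl delete service sentry-kafka sentry-web sentry-snuba --namespace=$APP_NAME
helm upgrade --install --debug $APP_NAME sentry/sentry \
  --namespace=$APP_NAME --version 3.0.1 -f values.yaml
```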
When I did
helm install sentry/sentry --generate-name
I've got
Error: Operation cannot be fulfilled on resourcequotas "iskiridomov": the object has been modified; please apply your changes to the latest version and try again
Any help?
There is a discrepancy between pvc and deployment templates:
claimName in deployment is set to sentry.fullname https://github.com/sentry-kubernetes/charts/blob/develop/sentry/templates/deployment-sentry-web.yaml#L106
but name of pvc in template is sentry-data https://github.com/sentry-kubernetes/charts/blob/develop/sentry/templates/pvc.yaml#L5
So I can't install with persistence enabled, or without creating a PVC manually.
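Until the template mismatch is fixed, a stopgap is to create the PVC that the web deployment actually references by hand. This is a sketch: the name sentry-sentry below assumes a release named sentry (the deployment uses sentry.fullname), and the accessMode/size are placeholders to match your own persistence settings:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sentry-sentry   # assumed value of sentry.fullname for a release named "sentry"
spec:
  accessModes:
    - ReadWriteOnce     # placeholder; match your persistence.accessMode
  resources:
    requests:
      storage: 10Gi     # placeholder; match your persistence.size
```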
Hi,
wondering if anyone has encountered a similar issue and might know what could be causing this.
Sentry receives an event and sends an email about the event being created. But if you follow the link to Sentry, it says:
Sorry, the events for this issue could not be found.
Even though it shows:
Events: 1
but if you click on the Events tab, it is also empty.
It happens to some events only. At some point something changes, and then events are coming through and showing up just fine.
Nothing in logs shows any errors or anything that would explain this.
snuba consumer is working fine and no errors.
workers are working fine and not backlogged.
Queue is completely empty.
The only thing that is different from the default chart is that we don't use rabbitmq; we use redis as the broker instead. Could that cause such behavior?
Any help or ideas would be greatly appreciated.
Error: rpc error: code = Unknown desc = configmaps "sentry10-sentry" already exists
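This usually means a previous (possibly failed) release left the ConfigMap behind without Helm ownership metadata. A hedged workaround, a sketch rather than a recommended procedure, is to remove the orphaned object before reinstalling:

```shell
# Only do this after confirming no live release owns the ConfigMap
kubectl delete configmap sentry10-sentry
```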
Please add ldap support
We should probably support GeoIP, potentially through a sidecar using this Docker image.
Hi,
How can I configure OAuth2 with GitLab?
So that users registered in GitLab can log in to Sentry.
I found the plugin https://github.com/SkyLothar/sentry-auth-gitlab
but I don't know how to install it.
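Sentry plugins generally have to be baked into the image the chart runs, then referenced from images.sentry in the values. A minimal sketch, assuming the plugin is pip-installable straight from GitHub (not verified here, and you should check it supports your Sentry version):

```dockerfile
# Hypothetical custom image with the GitLab auth plugin preinstalled
FROM getsentry/sentry:latest
RUN pip install https://github.com/SkyLothar/sentry-auth-gitlab/archive/master.zip
```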
First great job with this repo.
However, there is no way to add tolerations/nodeSelector to the k8s jobs.
Happy to submit a PR
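For illustration, the kind of values keys such a PR might add could look like the following. These keys do not exist in the chart today; they are hypothetical and only mirror the hooks structure the chart already has:

```yaml
hooks:
  dbInit:
    nodeSelector: {}   # hypothetical key
    tolerations: []    # hypothetical key
  snubaInit:
    nodeSelector: {}   # hypothetical key
    tolerations: []    # hypothetical key
```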
Thanks,
Since sentry version v20.6.0 has been released, it would be nice to be able to install v20.6.0 on the cluster as well.
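Until the chart defaults are bumped, one way to try it is overriding the image tag, following the images structure the chart already uses. This is a sketch; whether 20.6.0 actually works with the chart's current templates and the matching snuba version is untested:

```yaml
images:
  sentry:
    repository: getsentry/sentry
    tag: 20.6.0   # untested with this chart version
```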
I'm almost certain a number of our dependencies already use the values we're passing through the sentry values.
The current root values file is 1159 lines; I'm certain there's a ton of fat we can trim by using the defaults from the dependency charts.
I assume that for people coming to this chart, it's pretty overwhelming having all these options, 90% of which they probably don't need.
The Sentry worker cannot connect to rabbitmq, which I guess causes issues to not be processed.
11:36:23 [ERROR] celery.worker.consumer: consumer: Cannot connect to amqp://guest:**@sentry-rabbitmq:5672//: Error opening socket: hostname lookup failed.
Sometimes when you have source maps you can get some 502s.
Can you update the config map with these values?
https://forum.sentry.io/t/sourcemap-upload-failing-when-file-size-more-than-20mb/4660/2
--- a/sentry.conf.py
+++ b/sentry.conf.py
@@ -247,6 +247,7 @@ SENTRY_WEB_HOST = '0.0.0.0'
SENTRY_WEB_PORT = 9000
SENTRY_WEB_OPTIONS = {
# 'workers': 3, # the number of web workers
+ 'limit-post': 1024 * 1024 * 1024 * 2
}
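Rather than patching the ConfigMap by hand, the chart's config.sentryConfPy hook (appended to the generated sentry.conf.py) should let you apply the same override through values. A sketch, assuming the extension snippet runs after SENTRY_WEB_OPTIONS is defined, as the generated config suggests:

```yaml
config:
  sentryConfPy: |
    # Allow uploads (e.g. large source maps) up to 2 GiB
    SENTRY_WEB_OPTIONS["limit-post"] = 1024 * 1024 * 1024 * 2
```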
Hi there, the sentry chart currently depends on the rabbitmq-ha chart in version 1.38.2.
With version 1.39.0 they added the option to specify affinity
on the statefulset, which would be a nice-to-have.
Since this is the only change in this version (helm/charts@ca058a2#diff-506629ccdea7dd5052345d0c0b8a7323), I think it should be safe to update to that version at least?
Thanks!
I can't see a good reason why this is in its own root path in the values YAML.
It's only referenced in the Sentry configmap.
Line 147 in 264aa1d
Hello, I just upgraded to Kubernetes 1.16 and got stuck with the sentry helm chart.
Error: unable to build kubernetes objects from current release manifest: unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2"
I guess StatefulSet in apps/v1beta2 is deprecated and apps/v1 should be used instead; any chance of a fix?
https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
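The fix is exactly what the linked deprecation notes describe: the StatefulSet templates need to move off the removed API group, roughly:

```diff
-apiVersion: apps/v1beta2
+apiVersion: apps/v1
 kind: StatefulSet
```

Note that apps/v1 also requires a spec.selector matching the pod template labels, so templates written against the beta API may need that field added as well.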
Made it pretty far, but getting stuck here. Just stays pending forever with no logs:
sentry-db-init-j56m6 0/1 Pending 0 4m19s
snuba-db-init-mkzgt 0/1 Completed 0 4m49s
snuba-migrate-4tm7d 0/1 Completed 0 4m37s
client.go:234: [debug] Starting delete for "sentry-db-init" Job
client.go:98: [debug] creating 1 resource(s)
client.go:439: [debug] Watching for changes to Job sentry-db-init with timeout of 20m0s
client.go:467: [debug] Add/Modify event for sentry-db-init: ADDED
client.go:506: [debug] sentry-db-init: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
I'm running bare metal on DigitalOcean; any ideas?
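A Pending pod with no logs usually means the scheduler can't place it at all. The first diagnostic step, a generic sketch (substitute your actual pod name and namespace), is to look at its events:

```shell
# Shows scheduling events, e.g. insufficient CPU/memory or an unbound PVC
kubectl describe pod sentry-db-init-j56m6
```

Common culprits are the db-init hook's resource requests (2Gi memory by default in this chart) exceeding what any node has free, or a PersistentVolumeClaim that can't be bound.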
Should run:
When you install this, it has nearly no resource limits/requests and can easily take down some of your nodes. I tried ;)
It would be cool to have an example template with all the resource definitions of the main and subcharts. Guessing them is always so hard and maybe somebody has experience with some useful default recommended values.
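For illustration only, such an example could start from the hook resources the chart already ships plus rough guesses for the long-running components. Every number below is a placeholder to tune against your own load, not a recommendation:

```yaml
sentry:
  web:
    resources:
      requests: {cpu: 300m, memory: 1Gi}    # placeholder
      limits: {cpu: "1", memory: 2Gi}       # placeholder
  worker:
    resources:
      requests: {cpu: 300m, memory: 1Gi}    # placeholder
      limits: {cpu: "1", memory: 2Gi}       # placeholder
snuba:
  api:
    resources:
      requests: {cpu: 100m, memory: 256Mi}  # placeholder
      limits: {cpu: 500m, memory: 512Mi}    # placeholder
```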
I think a value like .Values.redis.cluster.enabled could be useful.
From the previous chart there was support for statsd/prometheus via
Are there any plans for this type of 'native' integration in this chart, a la metrics.enabled=true?
Slack Apps created after June 2020 somehow are behaving differently.
The docs were updated here: https://github.com/getsentry/develop/blob/master/src/docs/integrations/slack/index.mdx
But the helm template only creates this structure:
{{- if .Values.slack.clientId }}
slack.client-id: "{{ .Values.slack.clientId }}"
slack.client-secret: "{{ .Values.slack.clientSecret }}"
slack.verification-token: "{{ .Values.slack.verificationToken }}"
{{ end }}
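Newer Slack apps use a signing secret instead of the verification token, so the template would presumably need an additional branch along these lines. The slack.signing-secret option name follows the updated Sentry docs linked above; the signingSecret values key is hypothetical:

```yaml
{{- if .Values.slack.signingSecret }}
slack.signing-secret: "{{ .Values.slack.signingSecret }}"
{{ end }}
```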
Hi, it fails with this error when you leave the defaults empty.
File "/usr/local/lib/python2.7/site-packages/sentry/options/manager.py", line 270, in validate_option
raise TypeError("%r: got %r, expected %r" % (key, _type(value), opt.type))
TypeError: 'mail.username': got <type 'NoneType'>, expected string
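The validator wants a string, so a workaround (a sketch) is to pass explicit empty strings rather than leaving the keys unset, which YAML renders as null/None:

```yaml
mail:
  username: ""   # empty string, not null
  password: ""   # empty string, not null
```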
Hey
I recently installed sentry on-premise but I'm experiencing weird errors when attempting to load projects/issue streams.
sentry-snuba-consumer and sentry-snuba-outcomes-consumer both crash and have the following logs:
Consumer:
2020-07-17 19:08:15,209 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0, Partition(topic=Topic(name='events'), index=1): 2, Partition(topic=Topic(name='events'), index=2): 0, Partition(topic=Topic(name='events'), index=3): 0, Partition(topic=Topic(name='events'), index=4): 0, Partition(topic=Topic(name='events'), index=5): 0, Partition(topic=Topic(name='events'), index=6): 19, Partition(topic=Topic(name='events'), index=7): 0, Partition(topic=Topic(name='events'), index=8): 0, Partition(topic=Topic(name='events'), index=9): 0, Partition(topic=Topic(name='events'), index=10): 0, Partition(topic=Topic(name='events'), index=11): 69}
2020-07-17 19:08:16,316 Flushing 13 items (from {Partition(topic=Topic(name='events'), index=11): Offsets(lo=69, hi=70), Partition(topic=Topic(name='events'), index=6): Offsets(lo=19, hi=29)}): forced:False size:False time:True
2020-07-17 19:08:16,434 Worker flush took 118ms
2020-07-17 19:08:21,255 Flushing 1 items (from {Partition(topic=Topic(name='events'), index=11): Offsets(lo=71, hi=71)}): forced:False size:False time:True
Traceback (most recent call last):
File "/usr/local/bin/snuba", line 11, in <module>
load_entry_point('snuba', 'console_scripts', 'snuba')()
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/src/snuba/snuba/cli/consumer.py", line 162, in consumer
consumer.run()
File "/usr/src/snuba/snuba/utils/streams/batching.py", line 137, in run
self._run_once()
File "/usr/src/snuba/snuba/utils/streams/batching.py", line 142, in _run_once
self._flush()
File "/usr/src/snuba/snuba/utils/streams/batching.py", line 242, in _flush
self.worker.flush_batch(self.__batch_results)
File "/usr/src/snuba/snuba/consumer.py", line 109, in flush_batch
self.__writer.write(inserts)
File "/usr/src/snuba/snuba/clickhouse/http.py", line 84, in write
raise ClickhouseError(int(code), message)
snuba.clickhouse.errors.ClickhouseError: [60] Table default.sentry_dist doesn't exist. (version 19.16.19.85 (official build))
Outcomes Consumer:
2020-07-17 19:09:06,207 New partitions assigned: {Partition(topic=Topic(name='outcomes'), index=0): 21, Partition(topic=Topic(name='outcomes'), index=1): 11, Partition(topic=Topic(name='outcomes'), index=2): 1, Partition(topic=Topic(name='outcomes'), index=3): 9, Partition(topic=Topic(name='outcomes'), index=4): 2, Partition(topic=Topic(name='outcomes'), index=5): 7, Partition(topic=Topic(name='outcomes'), index=6): 7, Partition(topic=Topic(name='outcomes'), index=7): 32, Partition(topic=Topic(name='outcomes'), index=8): 5, Partition(topic=Topic(name='outcomes'), index=9): 10, Partition(topic=Topic(name='outcomes'), index=10): 11, Partition(topic=Topic(name='outcomes'), index=11): 10}
2020-07-17 19:09:09,779 Flushing 1 items (from {Partition(topic=Topic(name='outcomes'), index=8): Offsets(lo=5, hi=5)}): forced:False size:False time:True
Traceback (most recent call last):
File "/usr/local/bin/snuba", line 11, in <module>
load_entry_point('snuba', 'console_scripts', 'snuba')()
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/src/snuba/snuba/cli/consumer.py", line 162, in consumer
consumer.run()
File "/usr/src/snuba/snuba/utils/streams/batching.py", line 137, in run
self._run_once()
File "/usr/src/snuba/snuba/utils/streams/batching.py", line 142, in _run_once
self._flush()
File "/usr/src/snuba/snuba/utils/streams/batching.py", line 242, in _flush
self.worker.flush_batch(self.__batch_results)
File "/usr/src/snuba/snuba/consumer.py", line 109, in flush_batch
self.__writer.write(inserts)
File "/usr/src/snuba/snuba/clickhouse/http.py", line 84, in write
raise ClickhouseError(int(code), message)
snuba.clickhouse.errors.ClickhouseError: [60] Table default.outcomes_raw_dist doesn't exist. (version 19.16.19.85 (official build))
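The missing sentry_dist and outcomes_raw_dist tables are the ones snuba's initialization creates in ClickHouse, so one hedged recovery path is to re-run that initialization by hand from a snuba pod. The bootstrap subcommand existed in snuba CLIs of this era, but its flags may differ in your image, and the deployment name below is an assumption:

```shell
# Re-creates Kafka topics and ClickHouse tables; deployment name is assumed
kubectl exec -it deploy/sentry-snuba-api -- snuba bootstrap --force
```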
I have custom values on the template, they are
system:
url: "xxxxxx"
adminEmail: "xxxxxxx"
secretKey: 'xxxxxxxx'
public: false # This should only be used if you're installing Sentry behind your company's firewall.
# Auth disabled because it'll be using Azure.
auth:
register: false
# Don't want a user creating by default
user:
create: false
## -- MICROSERVICE CONFIGURATION --
kafka:
enabled: false
## This value is only used when kafka.enabled is set to false
externalKafka:
host: "kafka-0.kafka-headless.databases.svc.cluster.local"
port: 9092
redis:
enabled: false
## This value is only used when redis.enabled is set to false
externalRedis:
host: "redis-master.databases.svc.cluster.local"
database: 4
prefix: 'turret'
port: 6379
postgresql:
enabled: false
## This value is only used when postgresql.enabled is set to false
externalPostgresql:
host: "xxxxxxxx"
port: 0000
username: "xxxx"
password: "xxxxx"
database: xxxxx
# sslMode: require
rabbitmq:
## If disabled, Redis will be used instead as the broker.
enabled: false
filestore:
# Set to one of filesystem, gcs or s3 as supported by Sentry.
backend: s3
s3:
accessKey: 'xxx'
secretKey: 'xxxx'
bucketName: xxx
endpointUrl: 'xxxx'
# signature_version:
region_name: 'xxxxx'
# default_acl:
I can provide any extra needed info.
Thanks,
Seems to be the same issue as #14 but I am still experiencing it with 0.11.1.
db-init log:
13:49:11 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
13:49:16 [INFO] sentry.plugins.github: apps-not-configured
Traceback (most recent call last):
File "/usr/local/bin/sentry", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/site-packages/sentry/runner/__init__.py", line 166, in main
cli(prog_name=get_prog(), obj={}, max_content_width=100)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/sentry/runner/decorators.py", line 30, in inner
return ctx.invoke(f, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 168, in upgrade
_upgrade(not noinput, traceback, verbosity, not no_repair)
File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 121, in _upgrade
_migrate_from_south(verbosity)
File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 93, in _migrate_from_south
if not _has_south_history(connection):
File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 78, in _has_south_history
cursor = connection.cursor()
File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 254, in cursor
return self._cursor()
File "/usr/local/lib/python2.7/site-packages/sentry/db/postgres/decorators.py", line 44, in inner
return func(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/sentry/db/postgres/base.py", line 99, in _cursor
cursor = super(DatabaseWrapper, self)._cursor()
File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 229, in _cursor
self.ensure_connection()
File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
self.connect()
File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
self.connect()
File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 189, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python2.7/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: fe_sendauth: no password supplied
Using these values:
prefix:
user:
create: true
email: [email protected]
password: aaaa
images:
sentry:
repository: getsentry/sentry
tag: b15b1512a59051c84775e5eb7186a4da505e0ac4
pullPolicy: IfNotPresent
# imagePullSecrets: []
snuba:
repository: getsentry/snuba
tag: 9d678c2551045a696e5701d845a83c77dc528bd0
pullPolicy: IfNotPresent
# imagePullSecrets: []
sentry:
web:
replicas: 1
env: {}
probeInitialDelaySeconds: 10
resources: {}
affinity: {}
nodeSelector: {}
# tolerations: []
# podLabels: []
autoscaling:
enabled: false
minReplicas: 2
maxReplicas: 5
targetCPUUtilizationPercentage: 50
worker:
replicas: 3
# concurrency: 4
env: {}
resources: {}
affinity: {}
nodeSelector: {}
# tolerations: []
# podLabels: []
cron:
env: {}
resources: {}
affinity: {}
nodeSelector: {}
# tolerations: []
# podLabels: []
postProcessForward:
replicas: 1
env: {}
resources: {}
affinity: {}
nodeSelector: {}
# tolerations: []
# podLabels: []
snuba:
api:
replicas: 1
env: {}
probeInitialDelaySeconds: 10
resources: {}
affinity: {}
nodeSelector: {}
# tolerations: []
# podLabels: []
autoscaling:
enabled: false
minReplicas: 2
maxReplicas: 5
targetCPUUtilizationPercentage: 50
consumer:
replicas: 1
env: {}
resources: {}
affinity: {}
nodeSelector: {}
# tolerations: []
# podLabels: []
replacer:
env: {}
resources: {}
affinity: {}
nodeSelector: {}
# tolerations: []
# podLabels: []
hooks:
enabled: true
dbInit:
resources:
limits:
memory: 2048Mi
requests:
cpu: 300m
memory: 2048Mi
snubaInit:
resources:
limits:
cpu: 2000m
memory: 1Gi
requests:
cpu: 700m
memory: 1Gi
system:
url: ""
adminEmail: ""
secretKey: 'icLq77rCyY_qrMMpXa6TQNjkDV6mU!c'
public: false # This should only be used if you're installing Sentry behind your company's firewall.
mail:
backend: dummy # smtp
useTls: false
username: ""
password: ""
port: 25
host: ""
from: ""
symbolicator:
enabled: false
auth:
register: false
service:
name: sentry
type: ClusterIP
externalPort: 9000
annotations: {}
# externalIPs:
# - 192.168.0.1
# loadBalancerSourceRanges: []
github: {} # https://github.com/settings/apps (Create a Github App)
# github:
# appId: "xxxx"
# appName: MyAppName
# clientId: "xxxxx"
# clientSecret: "xxxxx"
# privateKey: "-----BEGIN RSA PRIVATE KEY-----\nMIIEpA" !!!! Don't forget a trailing \n
# webhookSecret: "xxxxx"
githubSso: {} # https://github.com/settings/developers (Create a OAuth App)
# clientId: "xx"
# clientSecret: "xx"
slack: {}
# slack:
# clientId:
# clientSecret:
# verificationToken:
ingress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: letsencrypt-production
kubernetes.io/tls-acme: "true"
hostname: sentry.test.dev
tls:
- secretName: sentry-certs
hosts:
- sentry.test.dev
filestore:
# Set to one of filesystem, gcs or s3 as supported by Sentry.
backend: filesystem
filesystem:
path: /var/lib/sentry/files
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
enabled: true
## database data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessMode: ReadWriteOnce
size: 10Gi
## Whether to mount the persistent volume to the Sentry worker and
## cron deployments. This setting needs to be enabled for some advanced
## Sentry features, such as private source maps. If you disable this
## setting, the Sentry workers will not have access to artifacts you upload
## through the web deployment.
## Please note that you may need to change your accessMode to ReadWriteMany
## if you plan on having the web, worker and cron deployments run on
## different nodes.
persistentWorkers: false
## Point this at a pre-configured secret containing a service account. The resulting
## secret will be mounted at /var/run/secrets/google
gcs:
# credentialsFile: credentials.json
# secretName:
# bucketName:
## Currently unconfigured and changing this has no impact on the template configuration.
s3: {}
# accessKey:
# secretKey:
# bucketName:
# endpointUrl:
# signature_version:
# region_name:
# default_acl:
config:
configYml: |
# No YAML Extension Config Given
sentryConfPy: |
# No Python Extension Config Given
clickhouse:
enabled: true
clickhouse:
imageVersion: "19.16"
kafka:
enabled: true
replicaCount: 3
allowPlaintextListener: true
defaultReplicationFactor: 3
offsetsTopicReplicationFactor: 3
transactionStateLogReplicationFactor: 3
transactionStateLogMinIsr: 3
service:
port: 9092
redis:
## Required if the Redis component of this chart is disabled. (Existing Redis)
#
hostOverride: ""
enabled: true
nameOverride: sentry-redis
usePassword: false
# Only used when internal redis is disabled
# host: redis
# Just omit the password field if your redis cluster doesn't use password
# password: redis
# port: 6379
master:
persistence:
enabled: true
postgresql:
## Required if the Postgresql component of this chart is disabled. (Existing Postgres)
#
hostOverride: ""
enabled: true
nameOverride: sentry-postgresql
postgresqlUsername: postgres
postgresqlDatabase: sentry
# Only used when internal PG is disabled
# postgresqlHost: postgres
# postgresqlPassword: postgres
# postgresqlPort: 5432
# postgresSslMode: require
replication:
enabled: false
slaveReplicas: 2
synchronousCommit: "on"
numSynchronousReplicas: 1
rabbitmq:
## If disabled, Redis will be used instead as the broker.
enabled: true
forceBoot: true
replicaCount: 3
rabbitmqErlangCookie: pHgpy3Q6adTskzAT6bLHCFqFTF7lMxhA
rabbitmqUsername: guest
rabbitmqPassword: guest
nameOverride: ""
podDisruptionBudget:
minAvailable: 1
persistentVolume:
enabled: true
resources: {}
# rabbitmqMemoryHighWatermark: 600MB
# rabbitmqMemoryHighWatermarkType: absolute
definitions:
policies: |-
{
"name": "ha-all",
"pattern": "^((?!celeryev.*).)*$",
"vhost": "/",
"definition": {
"ha-mode": "all",
"ha-sync-mode": "automatic",
"ha-sync-batch-size": 1
}
}
Sentry configmap:
apiVersion: v1
data:
config.yml: |-
system.secret-key: icLq77rCyY_qrMMpXa6TQNjkDV6mU!c
symbolicator.enabled: false
symbolicator.options:
url: "http://sentry-symbolicator:3021"
mail.backend: "dummy"
mail.use-tls: false
mail.username: ""
mail.password: ""
mail.port: 25
mail.host: ""
mail.from: ""
################
# Redis #
################
redis.clusters:
default:
hosts:
0:
host: "sentry-sentry-redis-master"
port: 6379
################
# File storage #
################
# Uploaded media uses these `filestore` settings. The available
# backends are either `filesystem` or `s3`.
filestore.backend: 'filesystem'
filestore.options:
location: '/var/lib/sentry/files'
# No YAML Extension Config Given
sentry.conf.py: "from sentry.conf.server import * # NOQA\n\nDATABASES = {\n \"default\":
{\n \"ENGINE\": \"sentry.db.postgres\",\n \"NAME\": \"sentry\",\n
\ \"USER\": \"postgres\",\n \"PASSWORD\": os.environ.get(\"POSTGRES_PASSWORD\",
\"\"),\n \"HOST\": \"sentry-sentry-postgresql\",\n \"PORT\": \"5432\",\n
\ }\n}\n\n# You should not change this setting after your database has been
created\n# unless you have altered all schemas first\nSENTRY_USE_BIG_INTS = True\n\n###########\n#
General #\n###########\n\n# Instruct Sentry that this install intends to be run
by a single organization\n# and thus various UI optimizations should be enabled.\nSENTRY_SINGLE_ORGANIZATION
= True\n\nSENTRY_OPTIONS[\"system.event-retention-days\"] = int(env('SENTRY_EVENT_RETENTION_DAYS')
or 90)\n\n#########\n# Queue #\n#########\n\n# See https://docs.getsentry.com/on-premise/server/queue/
for more\n# information on configuring your queue broker and workers. Sentry relies\n#
on a Python framework called Celery to manage queues.\nBROKER_URL = os.environ.get(\"BROKER_URL\",
\"amqp://guest:guest@sentry-rabbitmq:5672//\")\n \n#########\n# Cache #\n#########\n\n#
Sentry currently utilizes two separate mechanisms. While CACHES is not a\n# requirement,
it will optimize several high throughput patterns.\n\n# CACHES = {\n# \"default\":
{\n# \"BACKEND\": \"django.core.cache.backends.memcached.MemcachedCache\",\n#
\ \"LOCATION\": [\"memcached:11211\"],\n# \"TIMEOUT\": 3600,\n#
\ }\n# }\n\n# A primary cache is required for things such as processing events\nSENTRY_CACHE
= \"sentry.cache.redis.RedisCache\"\n\nDEFAULT_KAFKA_OPTIONS = {\n \"bootstrap.servers\":
\"sentry-kafka:9092\",\n \"message.max.bytes\": 50000000,\n \"socket.timeout.ms\":
1000,\n}\n\nSENTRY_EVENTSTREAM = \"sentry.eventstream.kafka.KafkaEventStream\"\nSENTRY_EVENTSTREAM_OPTIONS
= {\"producer_configuration\": DEFAULT_KAFKA_OPTIONS}\n\nKAFKA_CLUSTERS[\"default\"]
= DEFAULT_KAFKA_OPTIONS\n\n###############\n# Rate Limits #\n###############\n\n#
# Rate limits apply to notification handlers and are enforced per-project
# automatically.

SENTRY_RATELIMITER = "sentry.ratelimits.redis.RedisRateLimiter"

##################
# Update Buffers #
##################

# Buffers (combined with queueing) act as an intermediate layer between the
# database and the storage API. They will greatly improve efficiency on large
# numbers of the same events being sent to the API in a short amount of time.
# (read: if you send any kind of real data to Sentry, you should enable buffers)

SENTRY_BUFFER = "sentry.buffer.redis.RedisBuffer"

##########
# Quotas #
##########

# Quotas allow you to rate limit individual projects or the Sentry install as
# a whole.

SENTRY_QUOTAS = "sentry.quotas.redis.RedisQuota"

########
# TSDB #
########

# The TSDB is used for building charts as well as making things like per-rate
# alerts possible.

SENTRY_TSDB = "sentry.tsdb.redissnuba.RedisSnubaTSDB"

#########
# SNUBA #
#########

SENTRY_SEARCH = "sentry.search.snuba.EventsDatasetSnubaSearchBackend"
SENTRY_SEARCH_OPTIONS = {}
SENTRY_TAGSTORE_OPTIONS = {}

###########
# Digests #
###########

# The digest backend powers notification summaries.

SENTRY_DIGESTS = "sentry.digests.backends.redis.RedisBackend"

##############
# Web Server #
##############

SENTRY_WEB_HOST = "0.0.0.0"
SENTRY_WEB_PORT = 9000
SENTRY_PUBLIC = False
SENTRY_WEB_OPTIONS = {
    "http": "%s:%s" % (SENTRY_WEB_HOST, SENTRY_WEB_PORT),
    "protocol": "uwsgi",
    # This is needed to prevent https://git.io/fj7Lw
    "uwsgi-socket": None,
    "http-keepalive": True,
    "memory-report": False,
    # 'workers': 3,  # the number of web workers
}

###########
# SSL/TLS #
###########

# If you're using a reverse SSL proxy, you should enable the X-Forwarded-Proto
# header and enable the settings below

# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# SESSION_COOKIE_SECURE = True
# CSRF_COOKIE_SECURE = True
# SOCIAL_AUTH_REDIRECT_IS_HTTPS = True

# End of SSL/TLS settings

############
# Features #
############

SENTRY_FEATURES = {
    "auth:register": True
}
SENTRY_FEATURES["projects:sample-events"] = False
SENTRY_FEATURES.update(
    {
        feature: True
        for feature in (
            "organizations:discover",
            "organizations:events",
            "organizations:global-views",
            "organizations:integrations-issue-basic",
            "organizations:integrations-issue-sync",
            "organizations:invite-members",
            "organizations:new-issue-ui",
            "organizations:repos",
            "organizations:require-2fa",
            "organizations:sentry10",
            "organizations:sso-basic",
            "organizations:sso-rippling",
            "organizations:sso-saml2",
            "organizations:suggested-commits",
            "projects:custom-inbound-filters",
            "projects:data-forwarding",
            "projects:discard-groups",
            "projects:plugins",
            "projects:rate-limits",
            "projects:servicehooks",
        )
    }
)

######################
# GitHub Integration #
######################

GITHUB_APP_ID = ''
GITHUB_API_SECRET = ''

#########################
# Bitbucket Integration #
#########################

# BITBUCKET_CONSUMER_KEY = 'YOUR_BITBUCKET_CONSUMER_KEY'
# BITBUCKET_CONSUMER_SECRET = 'YOUR_BITBUCKET_CONSUMER_SECRET'

# No Python Extension Config Given
kind: ConfigMap
metadata:
  creationTimestamp: "2020-04-14T13:48:03Z"
  labels:
    app: sentry
    chart: sentry-0.11.1
    heritage: Helm
    release: sentry
  name: sentry-sentry
  namespace: sentry
  resourceVersion: "48576753"
  selfLink: /api/v1/namespaces/sentry/configmaps/sentry-sentry
  uid: 87cd326e-464e-47a2-8de0-eb97092dcda6
When creating a new project, the DSN is not generated. Both the DSN field and the deprecated DSN field are blank.
And clicking generate new key doesn't do it either.
Anyone else run into this issue?
helm install sentry
Error: YAML parse error on sentry/templates/configmap-snuba.yaml: error converting YAML to JSON: yaml: line 14: did not find expected key
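One way to locate the offending line without installing anything (a sketch, assuming the repo has been added as sentry/sentry) is to render the templates locally and inspect the generated YAML:

```shell
# Render all chart templates to stdout instead of installing, so the
# generated configmap-snuba.yaml (and its line 14) can be inspected.
helm template sentry sentry/sentry --debug > rendered.yaml

# Show the rendered snuba ConfigMap around the reported line.
grep -n -A 20 "configmap-snuba" rendered.yaml | head -40
```

helm template renders with the same values pipeline as helm install, so whatever indentation error broke the install will appear verbatim in rendered.yaml.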
Hello,
I'm trying to install Sentry with the Helm chart and I'm having an issue with it:
the 'db-init' job always fails.
Here are the commands that I used:
> helm repo add sentry https://sentry-kubernetes.github.io/charts
> helm repo update
// installing sentry with default values.
> helm install test --wait sentry/sentry
When I run 'kubectl get pods' I see a few failed pods created by the db-init job.
> kubectl get pods
...
sentry-test-db-init-bbdrl 0/1 Error 0 17s
sentry-test-db-init-f5szm 0/1 Error 0 27s
sentry-test-db-init-hvtqq 0/1 Error 0 34s
Here is the full log from these pods; it says password authentication failed.
2020-07-05T07:24:33.699691445Z 07:24:33 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
2020-07-05T07:24:37.096954703Z 07:24:37 [INFO] sentry.plugins.github: apps-not-configured
2020-07-05T07:24:37.44814423Z Traceback (most recent call last):
2020-07-05T07:24:37.448184276Z File "/usr/local/bin/sentry", line 8, in <module>
2020-07-05T07:24:37.448190958Z sys.exit(main())
2020-07-05T07:24:37.448195843Z File "/usr/local/lib/python2.7/site-packages/sentry/runner/__init__.py", line 166, in main
2020-07-05T07:24:37.44820079Z cli(prog_name=get_prog(), obj={}, max_content_width=100)
2020-07-05T07:24:37.448205407Z File "/usr/local/lib/python2.7/site-packages/click/core.py", line 722, in __call__
2020-07-05T07:24:37.44825461Z return self.main(*args, **kwargs)
2020-07-05T07:24:37.448325686Z File "/usr/local/lib/python2.7/site-packages/click/core.py", line 697, in main
2020-07-05T07:24:37.448336165Z rv = self.invoke(ctx)
2020-07-05T07:24:37.4483415Z File "/usr/local/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
2020-07-05T07:24:37.448575913Z return _process_result(sub_ctx.command.invoke(sub_ctx))
2020-07-05T07:24:37.448593774Z File "/usr/local/lib/python2.7/site-packages/click/core.py", line 895, in invoke
2020-07-05T07:24:37.448725785Z return ctx.invoke(self.callback, **ctx.params)
2020-07-05T07:24:37.448741583Z File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
2020-07-05T07:24:37.448800729Z return callback(*args, **kwargs)
2020-07-05T07:24:37.448815546Z File "/usr/local/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
2020-07-05T07:24:37.448821339Z return f(get_current_context(), *args, **kwargs)
2020-07-05T07:24:37.448825948Z File "/usr/local/lib/python2.7/site-packages/sentry/runner/decorators.py", line 30, in inner
2020-07-05T07:24:37.448887917Z return ctx.invoke(f, *args, **kwargs)
2020-07-05T07:24:37.448898106Z File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
2020-07-05T07:24:37.44891772Z return callback(*args, **kwargs)
2020-07-05T07:24:37.44892334Z File "/usr/local/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
2020-07-05T07:24:37.448940672Z return f(get_current_context(), *args, **kwargs)
2020-07-05T07:24:37.44894626Z File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 174, in upgrade
2020-07-05T07:24:37.449034581Z _upgrade(not noinput, traceback, verbosity, not no_repair, with_nodestore)
2020-07-05T07:24:37.449047054Z File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 121, in _upgrade
2020-07-05T07:24:37.449053366Z _migrate_from_south(verbosity)
2020-07-05T07:24:37.449068934Z File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 93, in _migrate_from_south
2020-07-05T07:24:37.449096207Z if not _has_south_history(connection):
2020-07-05T07:24:37.449104958Z File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 78, in _has_south_history
2020-07-05T07:24:37.449110233Z cursor = connection.cursor()
2020-07-05T07:24:37.449114757Z File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 254, in cursor
2020-07-05T07:24:37.449188661Z return self._cursor()
2020-07-05T07:24:37.449199228Z File "/usr/local/lib/python2.7/site-packages/sentry/db/postgres/decorators.py", line 44, in inner
2020-07-05T07:24:37.449205047Z return func(self, *args, **kwargs)
2020-07-05T07:24:37.449210168Z File "/usr/local/lib/python2.7/site-packages/sentry/db/postgres/base.py", line 97, in _cursor
2020-07-05T07:24:37.449257896Z return super(DatabaseWrapper, self)._cursor()
2020-07-05T07:24:37.449267889Z File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 229, in _cursor
2020-07-05T07:24:37.449317058Z self.ensure_connection()
2020-07-05T07:24:37.449326793Z File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
2020-07-05T07:24:37.44936076Z self.connect()
2020-07-05T07:24:37.449368767Z File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 94, in __exit__
2020-07-05T07:24:37.449403008Z six.reraise(dj_exc_type, dj_exc_value, traceback)
2020-07-05T07:24:37.449413707Z File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
2020-07-05T07:24:37.449441583Z self.connect()
2020-07-05T07:24:37.44945057Z File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 189, in connect
2020-07-05T07:24:37.449481718Z self.connection = self.get_new_connection(conn_params)
2020-07-05T07:24:37.449489393Z File "/usr/local/lib/python2.7/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
2020-07-05T07:24:37.449537086Z connection = Database.connect(**conn_params)
2020-07-05T07:24:37.449544749Z File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 127, in connect
2020-07-05T07:24:37.449556407Z conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
2020-07-05T07:24:37.449650157Z django.db.utils.OperationalError: FATAL: password authentication failed for user "postgres"
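A common cause of this error (hedged — this depends on your setup) is reinstalling the chart while an old PostgreSQL PVC with the previously generated password survives, or relying on a generated password that changes between installs. A values.yaml sketch pinning the password; the key names assume the bundled Bitnami PostgreSQL subchart and may differ between chart versions:

```yaml
# values.yaml fragment -- assumes the bundled PostgreSQL subchart;
# key names are an assumption and may vary with the chart version.
postgresql:
  enabled: true
  postgresqlPassword: "a-fixed-password"  # pin instead of letting Helm generate one
```

If a stale PVC from a previous release still exists, it keeps the old password regardless of values, and deleting that PVC (losing its data) before reinstalling is what resolves the mismatch.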
Unable to install because the following URL throws a 404.
Why do we have a PVC for the sentry-web component? This is bad because:
a. Replicas can't be spawned
b. Deletion of these resources can be cumbersome for backend storage provisioners.
Just want to understand the reasoning behind this?
I'm not sure the web component actually needs persistence, but I don't know its deeper architectural design. My impression was that no persistent files are written to disk.
So I appear to have a problem in a cluster I'm working with where the readiness probes are failing and the MQ components keep restarting (this is my issue to deal with).
However, the workers appear to struggle to reconnect as consumers to the MQ server even though RabbitMQ seems to be OK. This MAY be because the service doesn't consider the pod "ready" while the readiness probe isn't finishing.
This may be an entirely localised issue, but I'm jotting it down here just in case.
sentry-snuba-consumer-6d8ccf79ff-lcdjj 0/1 CrashLoopBackOff 8 27m
version: release 0.12.3
snuba:
  repository: getsentry/snuba
  tag: 477561d1d5ae17ed9b88ed67cf208d0029dd07b4
  pullPolicy: IfNotPresent
clickhouse:
  enabled: true
  clickhouse:
    imageVersion: "19.16"
error log:
2020-04-23 14:11:42,904 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 538}
2020-04-23 14:11:43,912 Flushing 8 items (from {Partition(topic=Topic(name='events'), index=0): Offsets(lo=538, hi=545)}): forced:False size:False time:True
2020-04-23 14:11:43,933 Worker flush took 20ms
2020-04-23 14:12:14,119 Flushing 1 items (from {Partition(topic=Topic(name='events'), index=0): Offsets(lo=546, hi=546)}): forced:False size:False time:True
Traceback (most recent call last):
File "/usr/local/bin/snuba", line 11, in <module>
load_entry_point('snuba', 'console_scripts', 'snuba')()
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/src/snuba/snuba/cli/consumer.py", line 178, in consumer
consumer.run()
File "/usr/src/snuba/snuba/utils/streams/batching.py", line 137, in run
self._run_once()
File "/usr/src/snuba/snuba/utils/streams/batching.py", line 142, in _run_once
self._flush()
File "/usr/src/snuba/snuba/utils/streams/batching.py", line 242, in _flush
self.worker.flush_batch(self.__batch_results)
File "/usr/src/snuba/snuba/consumer.py", line 101, in flush_batch
self.__writer.write(inserts)
File "/usr/src/snuba/snuba/clickhouse/http.py", line 83, in write
raise ClickhouseError(int(code), message)
snuba.clickhouse.errors.ClickhouseError: [60] Table default.sentry_local doesn't exist. (version 19.16.17.1 (official build))
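The missing default.sentry_local table suggests the ClickHouse schema was never created, i.e. the snuba bootstrap step did not complete. A possible workaround (a sketch — the pod name is an assumption; substitute one of your snuba pods) is to re-run bootstrap, which creates the ClickHouse tables and Kafka topics:

```shell
# Exec into a snuba pod and (re)create the ClickHouse tables and Kafka
# topics. The pod name below is illustrative only.
kubectl exec -it sentry-snuba-api-0 -- snuba bootstrap --force
```

After the tables exist, the crashing consumer pod should recover on its next restart.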
I am getting the following error while trying to enable the tabix ingress via the values.yaml file.
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Ingress.spec.rules): invalid type for io.k8s.api.extensions.v1beta1.IngressSpec.rules: got "map", expected "array"
The template file https://github.com/sentry-kubernetes/charts/blob/develop/clickhouse/templates/ingress-clickhouse.yaml
seems to be the issue: spec.rules should be an array, not a map.
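For reference, spec.rules in the extensions/v1beta1 Ingress schema is a list of rule objects, so the rendered YAML should look roughly like this (host and service names are placeholders):

```yaml
spec:
  rules:
    - host: tabix.example.com        # each rule is a list item, note the dash
      http:
        paths:
          - path: /
            backend:
              serviceName: sentry-clickhouse-tabix
              servicePort: 80
```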
It would be nice to have this options in order to use an existing volume
filestore:
  filesystem:
    persistence:
      enabled: true
      existingClaim: "sentry"
https://github.com/sentry-kubernetes/charts/blob/3.0.0/sentry/templates/hpa-web.yaml#L5
https://github.com/sentry-kubernetes/charts/blob/3.0.0/sentry/templates/hpa-worker.yaml#L5
They both use name: {{ template "sentry.fullname" . }}, so it would be better to add suffixes like -web and -worker so that both HPAs can be enabled at the same time.
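A sketch of the suggested fix (illustrative — the exact template helper names come from the chart), giving each HPA a distinct name:

```yaml
# hpa-web.yaml
metadata:
  name: {{ template "sentry.fullname" . }}-web
---
# hpa-worker.yaml
metadata:
  name: {{ template "sentry.fullname" . }}-worker
```

With distinct names, enabling one autoscaler no longer overwrites the other.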
Chart version 3.0.0 uses clickhouse 1.2.0, which has an issue with the indentation of tolerations.
Please update it to the latest version so that tolerations can be used.
If filestore is configured with a PVC that has access mode ReadWriteOnce, then during an update sentry-web gets stuck in a Pending state, because by default the deployment performs a rolling update: a new sentry-web instance is created while the old one is still running, and the new instance can never reach the Running state because it cannot mount the PVC while the old instance still holds it.
The fix is simple: add the following to deployment-sentry-web.yaml:
  strategy:
    type: Recreate
That enforces shutting down the old sentry-web before a new instance is created.
It might be worth adding this only if filestore.filesystem.persistence.accessMode is set to ReadWriteOnce.
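A sketch of that conditional variant (the value path is taken from the discussion above; the template syntax is standard Helm):

```yaml
# deployment-sentry-web.yaml (fragment)
spec:
  {{- if eq .Values.filestore.filesystem.persistence.accessMode "ReadWriteOnce" }}
  strategy:
    type: Recreate  # only needed when the PVC cannot be mounted twice
  {{- end }}
```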
I'm not able to upgrade the chart.
Error: UPGRADE FAILED: failed to replace object: Service "sentry-sentry-clickhouse-replica" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-clickhouse" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-clickhouse-tabix" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-zookeeper" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-kafka" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-rabbitmq" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-web" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-snuba" is invalid: spec.clusterIP: Invalid value: "": field is immutable
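The "failed to replace object" wording suggests the upgrade was run with --force, which deletes and recreates Services instead of patching them and therefore trips over the immutable spec.clusterIP field. A hedged workaround sketch (release and chart names assumed from the error output):

```shell
# Upgrading without --force patches resources in place, leaving the
# immutable spec.clusterIP of each Service untouched.
helm upgrade sentry sentry/sentry -f values.yaml
```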
Hello,
It would be great to ship all StatefulSets in a way that makes it possible to run them on OpenShift.
For example, this currently fails:
create Pod sentry-clickhouse-replica-0 in StatefulSet sentry-clickhouse-replica failed error: pods "sentry-clickhouse-replica-0" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1016890000, 1016899999] spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
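Until the chart exposes configurable securityContexts, one blunt cluster-side workaround (an assumption about your cluster's policy, not a chart feature, and it deliberately relaxes security) is to grant the service account an SCC that permits the UID the image expects:

```shell
# Allow pods using the default service account in the sentry namespace to
# run with an arbitrary UID (requires cluster-admin). Note: this fixes the
# runAsUser violation only; a container requesting privileged: true would
# additionally need the far more permissive "privileged" SCC.
oc adm policy add-scc-to-user anyuid -z default -n sentry
```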