apache / solr-operator
Official Kubernetes operator for Apache Solr
Home Page: https://solr.apache.org/operator
License: Apache License 2.0
Currently the operator only supports SolrCloud; however, many users still run standalone Solr. Investigate how standalone Solr could be implemented as a separate CRD in the operator, and whether the operator would provide any benefit for standalone instances.
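As a starting point for discussion, a standalone CRD might look like the sketch below. The SolrStandalone kind and every field in it are illustrative assumptions; no such kind exists in the operator today.

```yaml
# Illustrative sketch only: SolrStandalone is a hypothetical kind,
# and all fields below are assumptions for discussion.
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrStandalone
metadata:
  name: example-standalone
spec:
  solrImage:
    tag: "8"
  # Standalone Solr has no ZooKeeper dependency, so unlike SolrCloud
  # there would be no zookeeperRef here.
  dataStorage:
    persistent: true   # assumed field: back /var/solr/data with a PVC
```

Even without SolrCloud features, the operator could still add value for standalone instances: managed StatefulSet/Service creation, persistent storage, and rolling image upgrades.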
Describe the bug
In config/operators/solr_operator.yaml
image: bloomberg/solr-operator:0.1.1
However, I don't see this image on Docker Hub, which results in the pod being stuck in the following state:
state:
waiting:
message: 'rpc error: code = Unknown desc = Error response from daemon: manifest
for bloomberg/solr-operator:0.1.1 not found'
To Reproduce
Run the examples
Describe the bug
Following the README to set up my first operator, make install deploy fails with:
[...snip...]
kustomize build config/crd > helm/solr-operator/crds/crds.yaml
/bin/sh: kustomize: command not found
make: *** [manifests] Error 127
To Reproduce
Steps to reproduce the behavior:
make install deploy
on a clean Mac system with Docker Desktop and Go installed.
Expected behavior
Build succeeds
Describe the bug
Solr registers itself in ZooKeeper using port 80 instead of 8983, making it fail when trying to contact itself.
These are the logs at the beginning:
Starting Solr
2020-02-26 09:10:35.481 INFO (main) [ ] o.e.j.u.log Logging initialized @988ms to org.eclipse.jetty.util.log.Slf4jLog
2020-02-26 09:10:35.695 WARN (main) [ ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
2020-02-26 09:10:35.695 WARN (main) [ ] o.e.j.x.XmlConfiguration Deprecated method public void org.eclipse.jetty.server.ServerConnector.setSoLingerTime(int) in file:///opt/solr-8.4.1/server/etc/jetty-http.xml
2020-02-26 09:10:35.706 INFO (main) [ ] o.e.j.s.Server jetty-9.4.19.v20190610; built: 2019-06-10T16:30:51.723Z; git: afcf563148970e98786327af5e07c261fda175d3; jvm 11.0.6+10
2020-02-26 09:10:35.730 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider Deployment monitor [file:///opt/solr-8.4.1/server/contexts/] at interval 0
2020-02-26 09:10:35.966 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
2020-02-26 09:10:35.985 INFO (main) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0
2020-02-26 09:10:35.987 INFO (main) [ ] o.e.j.s.session No SessionScavenger set, using defaults
2020-02-26 09:10:35.992 INFO (main) [ ] o.e.j.s.session node0 Scavenging every 660000ms
2020-02-26 09:10:36.091 INFO (main) [ ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
2020-02-26 09:10:36.098 INFO (main) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 8.4.1
2020-02-26 09:10:36.106 INFO (main) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port 8983
2020-02-26 09:10:36.106 INFO (main) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: /opt/solr
2020-02-26 09:10:36.107 INFO (main) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2020-02-26T09:10:36.107075Z
2020-02-26 09:10:36.107 INFO (main) [ ] o.a.s.s.SolrDispatchFilter Log level override, property solr.log.level=INFO
2020-02-26 09:10:36.148 INFO (main) [ ] o.a.s.c.SolrResourceLoader Using system property solr.solr.home: /var/solr/data
2020-02-26 09:10:36.200 INFO (main) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
2020-02-26 09:10:51.027 INFO (zkConnectionManagerCallback-2-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
2020-02-26 09:10:51.027 INFO (main) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
2020-02-26 09:10:51.144 INFO (main) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
2020-02-26 09:10:51.148 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /var/solr/data/solr.xml
2020-02-26 09:10:51.211 INFO (main) [ ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@732d0d24, but no JMX reporters were configured - adding default JMX reporter.
2020-02-26 09:10:51.921 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=true]
2020-02-26 09:10:52.170 WARN (main) [ ] o.e.j.u.s.S.config Trusting all certificates configured for Client@1dd7796b[provider=null,keyStore=null,trustStore=null]
2020-02-26 09:10:52.170 WARN (main) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for Client@1dd7796b[provider=null,keyStore=null,trustStore=null]
2020-02-26 09:10:52.353 WARN (main) [ ] o.e.j.u.s.S.config Trusting all certificates configured for Client@1a2bcd56[provider=null,keyStore=null,trustStore=null]
2020-02-26 09:10:52.353 WARN (main) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for Client@1a2bcd56[provider=null,keyStore=null,trustStore=null]
2020-02-26 09:10:52.364 INFO (main) [ ] o.a.s.c.ZkContainer Zookeeper client=matching-solrcloud-zookeeper-client:2181/
2020-02-26 09:10:52.383 INFO (main) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
2020-02-26 09:10:52.391 INFO (zkConnectionManagerCallback-9-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
2020-02-26 09:10:52.391 INFO (main) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
2020-02-26 09:10:52.512 INFO (main) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
2020-02-26 09:10:52.519 INFO (zkConnectionManagerCallback-11-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
2020-02-26 09:10:52.520 INFO (main) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
2020-02-26 09:10:52.656 INFO (main) [ ] o.a.s.c.OverseerElectionContext I am going to be the leader matching-solrcloud-0.matching-solrcloud-headless:80_solr
2020-02-26 09:10:52.663 INFO (main) [ ] o.a.s.c.Overseer Overseer (id=72301757338091522-matching-solrcloud-0.matching-solrcloud-headless:80_solr-n_0000000005) starting
2020-02-26 09:10:52.822 INFO (OverseerStateUpdate-72301757338091522-matching-solrcloud-0.matching-solrcloud-headless:80_solr-n_0000000005) [ ] o.a.s.c.Overseer Starting to work on the main queue : matching-solrcloud-0.matching-solrcloud-headless:80_solr
2020-02-26 09:10:52.828 INFO (main) [ ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/matching-solrcloud-0.matching-solrcloud-headless:80_solr
2020-02-26 09:10:52.878 INFO (OverseerStateUpdate-72301757338091522-matching-solrcloud-0.matching-solrcloud-headless:80_solr-n_0000000005) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
2020-02-26 09:10:52.954 INFO (main) [ ] o.a.s.p.PackageLoader /packages.json updated to version -1
2020-02-26 09:10:52.983 WARN (main) [ ] o.a.s.c.CoreContainer Not all security plugins configured! authentication=disabled authorization=disabled. Solr is only as secure as you make it. Consider configuring authentication/authorization before exposing Solr to users internal or external. See https://s.apache.org/solrsecurity for more info
2020-02-26 09:10:53.190 INFO (main) [ ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
2020-02-26 09:10:53.304 INFO (main) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@732d0d24
2020-02-26 09:10:53.305 INFO (main) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@732d0d24
2020-02-26 09:10:53.311 INFO (main) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@732d0d24
2020-02-26 09:10:53.352 INFO (main) [ ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /var/solr/data
2020-02-26 09:10:53.465 INFO (main) [ ] o.e.j.s.h.ContextHandler Started o.e.j.w.WebAppContext@16423501{/solr,file:///opt/solr-8.4.1/server/solr-webapp/webapp/,AVAILABLE}{/opt/solr-8.4.1/server/solr-webapp/webapp}
2020-02-26 09:10:53.477 INFO (main) [ ] o.e.j.s.AbstractConnector Started ServerConnector@388ba540{HTTP/1.1,[http/1.1, h2c]}{0.0.0.0:8983}
2020-02-26 09:10:53.477 INFO (main) [ ] o.e.j.s.Server Started @18987ms
2020-02-26 09:10:57.641 INFO (qtp1635378213-18) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=52
2020-02-26 09:10:59.562 INFO (qtp1635378213-14) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=7
2020-02-26 09:11:02.521 INFO (qtp1635378213-21) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=9
2020-02-26 09:11:07.520 INFO (qtp1635378213-18) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=8
2020-02-26 09:11:09.562 INFO (qtp1635378213-19) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=8
2020-02-26 09:11:12.521 INFO (qtp1635378213-23) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=8
2020-02-26 09:11:17.523 INFO (qtp1635378213-21) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=11
2020-02-26 09:11:19.566 INFO (qtp1635378213-14) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=12
2020-02-26 09:11:22.519 INFO (qtp1635378213-16) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=7
2020-02-26 09:11:27.518 INFO (qtp1635378213-22) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=6
2020-02-26 09:11:29.560 INFO (qtp1635378213-18) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=6
2020-02-26 09:11:32.522 INFO (qtp1635378213-19) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=7
2020-02-26 09:11:37.519 INFO (qtp1635378213-17) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=7
2020-02-26 09:11:39.560 INFO (qtp1635378213-21) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=6
2020-02-26 09:11:42.522 INFO (qtp1635378213-20) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=8
2020-02-26 09:11:47.521 INFO (qtp1635378213-19) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=8
2020-02-26 09:11:49.560 INFO (qtp1635378213-17) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=7
2020-02-26 09:11:52.523 INFO (qtp1635378213-21) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={} status=0 QTime=11
2020-02-26 09:11:53.254 INFO (MetricsHistoryHandler-12-thread-1) [ ] o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying again: Server refused connection at: http://matching-solrcloud-0.matching-solrcloud-headless:80/solr
2020-02-26 09:11:53.756 INFO (MetricsHistoryHandler-12-thread-1) [ ] o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying again: Server refused connection at: http://matching-solrcloud-0.matching-solrcloud-headless:80/solr
2020-02-26 09:11:54.257 INFO (MetricsHistoryHandler-12-thread-1) [ ] o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying again: Server refused connection at: http://matching-solrcloud-0.matching-solrcloud-headless:80/solr
2020-02-26 09:11:54.758 WARN (MetricsHistoryHandler-12-thread-1) [ ] o.a.s.c.s.i.SolrClientNodeStateProvider could not get tags from node matching-solrcloud-0.matching-solrcloud-headless:80_solr => java.lang.NullPointerException
at org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
java.lang.NullPointerException: null
at org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226) ~[?:?]
at java.util.HashMap.forEach(HashMap.java:1336) ~[?:?]
at org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchReplicaMetrics(SolrClientNodeStateProvider.java:225) ~[?:?]
at org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:271) ~[?:?]
at org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:76) ~[?:?]
at org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchTagValues(SolrClientNodeStateProvider.java:139) ~[?:?]
at org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:128) ~[?:?]
at org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:506) ~[?:?]
at org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:378) ~[?:?]
at org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:235) ~[?:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) ~[?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
To Reproduce
I am creating the SolrCloud using the following yaml:
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCloud
metadata:
  name: matching
  namespace: default
spec:
  replicas: 1
  solrImage:
    tag: "8"
  zookeeperRef:
    provided:
      zookeeper:
        replicas: 1
Expected behavior
I would expect the Solr instance to initialize correctly and register itself using port 8983.
Environment (please complete the following information):
We are running our cluster on AWS EKS, Kubernetes 1.14.
Additional context
I have also tried with versions 6 and 7 of Solr, but those had the same issue.
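For reference, later solr-operator releases expose addressability settings that control which port Solr advertises versus the port the common service exposes. A hedged sketch follows; the solrAddressability field names come from newer (post-Bloomberg) operator versions and may not exist in the version used above:

```yaml
# Sketch based on newer solr-operator releases (assumed availability):
# decouple the port Solr registers in ZooKeeper from the service port.
apiVersion: solr.apache.org/v1beta1
kind: SolrCloud
metadata:
  name: matching
spec:
  replicas: 1
  solrAddressability:
    podPort: 8983          # port Solr listens on and advertises to ZooKeeper
    commonServicePort: 80  # port exposed by the common Service only
```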
Is your feature request related to a problem? Please describe.
If there are three nodes and the SolrCloud "replicas" is set to 3, then all 3 Solr JVMs are placed on the same node. For performance and HA reasons it would be preferable to spread out the Solr JVMs.
Describe the solution you'd like
It should be possible to configure pod affinity so that the Solr JVMs are spread across all available nodes. The ZooKeeper operator does this.
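Standard Kubernetes pod anti-affinity expresses the requested spreading. The sketch below assumes the operator would surface a pod-options passthrough (the customSolrKubeOptions.podOptions path and the solr-cloud label are assumptions, not confirmed fields of the version discussed here):

```yaml
# Assumed passthrough location for pod scheduling options.
spec:
  customSolrKubeOptions:
    podOptions:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                # Prefer scheduling Solr pods on distinct Kubernetes nodes.
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    solr-cloud: example   # assumed pod label
```

Using preferredDuringScheduling (rather than required) keeps pods schedulable when there are fewer nodes than replicas.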
I would like to be able to upload new configsets via a CRD
I imagine that this would be implemented as two additional CRDs, SolrCloudBackup and SolrCloudRestore.
We would likely use the Solr Collections API Backup & Restore capability.
The CRDs would take as input:
Progress and completeness would be conveyed through the Status section of each CRD.
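A hedged sketch of what the proposed SolrCloudBackup CRD might look like; since the issue leaves the exact inputs open, every field below is illustrative:

```yaml
# Hypothetical CRD from this proposal; all fields are assumptions.
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCloudBackup
metadata:
  name: example-backup
spec:
  solrCloud: example            # which SolrCloud to back up
  collections:
    - example                   # collections to include
  location: /var/solr/backups   # backup repository path (e.g. a shared volume)
```

Under this design, the operator would drive the Collections API BACKUP command and surface its progress in the resource's Status section.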
Hi there,
We are considering using the Bloomberg operator for our Solr cluster, but there are a few things we need to configure that the operator doesn't provide for us:
The Solr operator should auto-run a prometheus exporter with SolrClouds that generates metrics.
I would imagine that there would be some configuration added to the SolrCloud CRD, like tagging information for Prometheus discovery.
In general we first need an official build of the Prometheus Exporter in a Docker container.
This should probably be added in the official docker-solr repo. I will be creating an issue there to see if we can have builds of the Prometheus exporter made automatically.
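One way the configuration could look is a dedicated exporter resource that references a SolrCloud. The kind name and field layout below are a sketch of this proposal (later operator versions ship something similar, but nothing here is confirmed for the version under discussion):

```yaml
# Sketch of a proposed exporter resource; field layout is assumed.
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrPrometheusExporter
metadata:
  name: example-exporter
spec:
  solrReference:
    cloud:
      name: example   # SolrCloud whose metrics should be exported
```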
Is your feature request related to a problem? Please describe.
CREATEALIAS registers the alias name with the names of one or more collections provided by the command.
Probably do this via a new CRD, "SolrCollectionAlias"
Describe the solution you'd like
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCollectionAlias
metadata:
  name: collection-alias-foo
spec:
  solrCloud: example
  collections:
    - example
    - example2
  aliasType: standard
Is your feature request related to a problem? Please describe.
Currently there is no way for users to know which Kubernetes Nodes their Solr pods are running on, without looking directly at those pods.
Describe the solution you'd like
The SolrNodeStatus should have a KubernetesNode attribute:
// SolrNodeStatus is the status of a solrNode in the cloud, with readiness status
// and internal and external addresses
type SolrNodeStatus struct {
	// The name of the pod running the node
	Name string `json:"name"`

	// The name of the Kubernetes node which the pod is running on
	NodeName string `json:"nodeName"`

	// An address the node can be connected to from within the Kube cluster
	InternalAddress string `json:"internalAddress"`

	// An address the node can be connected to from outside of the Kube cluster
	// Will only be provided when an ingressUrl is provided for the cloud
	// +optional
	ExternalAddress string `json:"externalAddress,omitempty"`

	// Is the node up and running
	Ready bool `json:"ready"`

	// The version of solr that the node is running
	Version string `json:"version"`
}
Describe alternatives you've considered
We could also add the node name as a SysProp for the SolrCloud node on startup. However, there are other details we need there that we will need to implement in a separate way.
Is your feature request related to a problem? Please describe.
I'd like to be able to apply a configset to a solr cluster managed through the operator.
Describe the solution you'd like
I could write a yaml file to define a configset.
Describe alternatives you've considered
Currently the only alternatives are to either build the image with the configsets or to upload one through the API before creating the configsets.
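A sketch of how a configset CRD could be written; the SolrConfigSet kind and its fields are hypothetical, invented here to illustrate the request:

```yaml
# Hypothetical kind for this feature request; nothing below exists yet.
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrConfigSet
metadata:
  name: my-configset
spec:
  solrCloud: example                # target SolrCloud
  configMapRef: my-configset-files  # ConfigMap holding solrconfig.xml, schema, etc.
```

The operator could then upload the referenced files to ZooKeeper as a configset, replacing the manual API upload step.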
This could make it easier for people to deploy and upgrade the Solr Operator to their clusters.
Is your feature request related to a problem? Please describe.
I got stuck several times setting up the operator, as the README goes straight into building from source and it is a bit unclear what each step does.
Describe the solution you'd like
The first steps in the README should not be building from source but installing with as few steps as possible, whether using Helm or kubectl apply with some combined YAML that sets everything up (including dependencies).
So when testing this out for myself, I created a kind of tutorial that is what I'd like to have seen when visiting the repo myself. It takes you through setting up your environment (Mac only) with an ingress controller and the k8s dashboard, then through Solr install, scale, upgrade, and delete. The tutorial is in this gist; feel free to steal it, or parts of it, into a Wiki page or something in this repo.
Is your feature request related to a problem? Please describe.
Perhaps some documentation on how to best use Horizontal Pod Autoscaler with solr-operator.
Curious, if anyone has tried this in production and what sort of results they have seen. HPA does work with StatefulSets and is able to scale down healthy nodes, but does SolrCloud handle this gracefully?
Describe the solution you'd like
Some documentation based on end-user experience with auto-scaling SolrCloud on k8s with solr-operator, and some example HPA configs if that is a good option.
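A minimal HPA manifest targeting the operator's StatefulSet could look like this sketch (autoscaling/v2beta2 is available on the Kubernetes versions discussed here; the StatefulSet name follows the operator's "<cloud>-solrcloud" convention and is an assumption for a cloud named "example"):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-solrcloud-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: example-solrcloud   # assumed: StatefulSet created by the operator
  minReplicas: 3
  maxReplicas: 9
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that if the operator reconciles the StatefulSet's replica count from the SolrCloud spec, a direct HPA on the StatefulSet would fight the operator; the HPA would instead need to target the SolrCloud CRD via a scale subresource, if one is exposed.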
When creating a collection I would like to be able to create and associate an autoscaling policy.
https://lucene.apache.org/solr/guide/7_7/solrcloud-autoscaling-policy-preferences.html
Describe the bug
I upgraded to the latest master branch code, built a solr-operator image, and tested with example/test_solrcloud.yaml locally on docker/k8s 1.15, but I am seeing errors.
To Reproduce
Check out latest master, build an image, and apply it to a local development environment.
Here are the logs:
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.154Z INFO controllers.SolrCloud Setting default settings for solr-cloud {"namespace": "default", "name": "example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.173Z INFO controllers.SolrCloud Creating Zookeeer Cluster {"namespace": "default", "name": "example-solrcloud-zookeeper"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.215Z INFO controllers.SolrCloud Creating Service {"namespace": "default", "name": "example-solrcloud-common"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.245Z INFO controllers.SolrCloud Creating Node Service {"namespace": "default", "name": "example-solrcloud-0"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.285Z INFO controllers.SolrCloud Creating Node Service {"namespace": "default", "name": "example-solrcloud-1"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.348Z INFO controllers.SolrCloud Creating Node Service {"namespace": "default", "name": "example-solrcloud-2"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.387Z INFO controllers.SolrCloud Creating HeadlessService {"namespace": "default", "name": "example-solrcloud-headless"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.403Z INFO controllers.SolrCloud Creating ConfigMap {"namespace": "default", "name": "example-solrcloud-configmap"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.420Z INFO controllers.SolrCloud Updating SolrCloud Status: {"namespace": "default", "name": "example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.440Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.440Z INFO controllers.SolrCloud Updating Zookeeer Cluster {"namespace": "default", "name": "example-solrcloud-zookeeper"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.463Z INFO controllers.SolrCloud Updating SolrCloud Status: {"namespace": "default", "name": "example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:03.478Z ERROR controller-runtime.controller Reconciler error {"controller": "solrcloud", "request": "default/example", "error": "Operation cannot be fulfilled on solrclouds.solr.bloomberg.com \"example\": the object has been modified; please apply your changes to the latest version and try again"}
solr-operator-7d7cb75f8b-j6mng solr-operator github.com/go-logr/zapr.(*zapLogger).Error
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
solr-operator-7d7cb75f8b-j6mng solr-operator sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218
solr-operator-7d7cb75f8b-j6mng solr-operator sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192
solr-operator-7d7cb75f8b-j6mng solr-operator sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171
solr-operator-7d7cb75f8b-j6mng solr-operator k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
solr-operator-7d7cb75f8b-j6mng solr-operator k8s.io/apimachinery/pkg/util/wait.JitterUntil
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
solr-operator-7d7cb75f8b-j6mng solr-operator k8s.io/apimachinery/pkg/util/wait.Until
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:04.492Z INFO controllers.SolrCloud Updating Zookeeer Cluster {"namespace": "default", "name": "example-solrcloud-zookeeper"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:04.513Z INFO controllers.SolrCloud Creating StatefulSet {"namespace": "default", "name": "example-solrcloud"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:04.528Z INFO controllers.SolrCloud Updating SolrCloud Status: {"namespace": "default", "name": "example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:04.544Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:04.544Z INFO controllers.SolrCloud Updating Zookeeer Cluster {"namespace": "default", "name": "example-solrcloud-zookeeper"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:04.575Z INFO controller Update required because: {"Spec.Template.Containers changed from": [{"name":"solrcloud-node","image":"library/solr:8.2.0","ports":[{"name":"solr-client","containerPort":8983,"protocol":"TCP"}],"env":[{"name":"SOLR_JAVA_MEM","value":"-Xms1g -Xmx3g"},{"name":"SOLR_HOME","value":"/var/solr/data"},{"name":"SOLR_PORT","value":"8983"},{"name":"POD_HOSTNAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}},{"name":"SOLR_HOST","value":"$(POD_HOSTNAME).example-solrcloud-headless"},{"name":"ZK_HOST","value":"10.107.18.247:2181/"},{"name":"SOLR_LOG_LEVEL","value":"INFO"},{"name":"SOLR_OPTS","value":"-Dsolr.autoSoftCommit.maxTime=10000"},{"name":"GC_TUNE","value":"-XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8"}],"resources":{"limits":{"memory":"1G"},"requests":{"cpu":"65m","memory":"156Mi"}},"volumeMounts":[{"name":"data","mountPath":"/var/solr/data"}],"livenessProbe":{"httpGet":{"path":"/solr/admin/info/system","port":8983,"scheme":"HTTP"},"initialDelaySeconds":20,"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/solr/admin/info/system","port":8983,"scheme":"HTTP"},"initialDelaySeconds":15,"timeoutSeconds":1,"periodSeconds":5,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}], "To:": [{"name":"solrcloud-node","image":"library/solr:8.2.0","ports":[{"name":"solr-client","containerPort":8983,"protocol":"TCP"}],"env":[{"name":"SOLR_JAVA_MEM","value":"-Xms1g 
-Xmx3g"},{"name":"SOLR_HOME","value":"/var/solr/data"},{"name":"SOLR_PORT","value":"8983"},{"name":"POD_HOSTNAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}},{"name":"SOLR_HOST","value":"$(POD_HOSTNAME).example-solrcloud-headless"},{"name":"ZK_HOST"},{"name":"SOLR_LOG_LEVEL","value":"INFO"},{"name":"SOLR_OPTS","value":"-Dsolr.autoSoftCommit.maxTime=10000"},{"name":"GC_TUNE","value":"-XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8"}],"resources":{"limits":{"memory":"1G"},"requests":{"cpu":"65m","memory":"156Mi"}},"volumeMounts":[{"name":"data","mountPath":"/var/solr/data"}],"livenessProbe":{"httpGet":{"path":"/solr/admin/info/system","port":8983,"scheme":"HTTP"},"initialDelaySeconds":20,"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/solr/admin/info/system","port":8983,"scheme":"HTTP"},"initialDelaySeconds":15,"timeoutSeconds":1,"periodSeconds":5,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}]}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:04.575Z INFO controllers.SolrCloud Updating StatefulSet {"namespace": "default", "name": "example-solrcloud"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:04.601Z ERROR controller-runtime.controller Reconciler error {"controller": "solrcloud", "request": "default/example", "error": "Operation cannot be fulfilled on statefulsets.apps \"example-solrcloud\": the object has been modified; please apply your changes to the latest version and try again"}
solr-operator-7d7cb75f8b-j6mng solr-operator github.com/go-logr/zapr.(*zapLogger).Error
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
solr-operator-7d7cb75f8b-j6mng solr-operator sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218
solr-operator-7d7cb75f8b-j6mng solr-operator sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192
solr-operator-7d7cb75f8b-j6mng solr-operator sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171
solr-operator-7d7cb75f8b-j6mng solr-operator k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
solr-operator-7d7cb75f8b-j6mng solr-operator k8s.io/apimachinery/pkg/util/wait.JitterUntil
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
solr-operator-7d7cb75f8b-j6mng solr-operator k8s.io/apimachinery/pkg/util/wait.Until
solr-operator-7d7cb75f8b-j6mng solr-operator /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:05.602Z INFO controller Update required because: {"Spec.Template.Containers changed from": [{"name":"solrcloud-node","image":"library/solr:8.2.0","ports":[{"name":"solr-client","containerPort":8983,"protocol":"TCP"}],"env":[{"name":"SOLR_JAVA_MEM","value":"-Xms1g -Xmx3g"},{"name":"SOLR_HOME","value":"/var/solr/data"},{"name":"SOLR_PORT","value":"8983"},{"name":"POD_HOSTNAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}},{"name":"SOLR_HOST","value":"$(POD_HOSTNAME).example-solrcloud-headless"},{"name":"ZK_HOST","value":"10.107.18.247:2181/"},{"name":"SOLR_LOG_LEVEL","value":"INFO"},{"name":"SOLR_OPTS","value":"-Dsolr.autoSoftCommit.maxTime=10000"},{"name":"GC_TUNE","value":"-XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8"}],"resources":{"limits":{"memory":"1G"},"requests":{"cpu":"65m","memory":"156Mi"}},"volumeMounts":[{"name":"data","mountPath":"/var/solr/data"}],"livenessProbe":{"httpGet":{"path":"/solr/admin/info/system","port":8983,"scheme":"HTTP"},"initialDelaySeconds":20,"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/solr/admin/info/system","port":8983,"scheme":"HTTP"},"initialDelaySeconds":15,"timeoutSeconds":1,"periodSeconds":5,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}], "To:": [{"name":"solrcloud-node","image":"library/solr:8.2.0","ports":[{"name":"solr-client","containerPort":8983,"protocol":"TCP"}],"env":[{"name":"SOLR_JAVA_MEM","value":"-Xms1g 
-Xmx3g"},{"name":"SOLR_HOME","value":"/var/solr/data"},{"name":"SOLR_PORT","value":"8983"},{"name":"POD_HOSTNAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}},{"name":"SOLR_HOST","value":"$(POD_HOSTNAME).example-solrcloud-headless"},{"name":"ZK_HOST"},{"name":"SOLR_LOG_LEVEL","value":"INFO"},{"name":"SOLR_OPTS","value":"-Dsolr.autoSoftCommit.maxTime=10000"},{"name":"GC_TUNE","value":"-XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8"}],"resources":{"limits":{"memory":"1G"},"requests":{"cpu":"65m","memory":"156Mi"}},"volumeMounts":[{"name":"data","mountPath":"/var/solr/data"}],"livenessProbe":{"httpGet":{"path":"/solr/admin/info/system","port":8983,"scheme":"HTTP"},"initialDelaySeconds":20,"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/solr/admin/info/system","port":8983,"scheme":"HTTP"},"initialDelaySeconds":15,"timeoutSeconds":1,"periodSeconds":5,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}]}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:05.603Z INFO controllers.SolrCloud Updating StatefulSet {"namespace": "default", "name": "example-solrcloud"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:05.628Z INFO controllers.SolrCloud Updating SolrCloud Status: {"namespace": "default", "name": "example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:05.662Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:05.663Z INFO controllers.SolrCloud Updating Zookeeer Cluster {"namespace": "default", "name": "example-solrcloud-zookeeper"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:05.678Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:05.679Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:06.742Z INFO controllers.SolrCloud Updating Zookeeer Cluster {"namespace": "default", "name": "example-solrcloud-zookeeper"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:06.758Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:06.759Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:07.799Z INFO controllers.SolrCloud Updating Zookeeer Cluster {"namespace": "default", "name": "example-solrcloud-zookeeper"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:07.813Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:07.815Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:07.831Z INFO controllers.SolrCloud Updating Zookeeer Cluster {"namespace": "default", "name": "example-solrcloud-zookeeper"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:07.841Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:07.842Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:08.879Z INFO controllers.SolrCloud Updating Zookeeer Cluster {"namespace": "default", "name": "example-solrcloud-zookeeper"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:08.897Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
solr-operator-7d7cb75f8b-j6mng solr-operator 2019-12-03T19:09:08.898Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "solrcloud", "request": "default/example"}
---snip---
The operator remains in a reconcile update loop. This appears to be an issue with the ZooKeeper statefulset being modified post-creation. Not sure what changed in zk-operator with this last merge; needs further debugging. Anyone else having this issue?
Is your feature request related to a problem? Please describe.
I am creating this issue to track the implementation of using zkstatereader to query ZooKeeper directly for collections and other resources, instead of querying Solr, to keep the number of queries to Solr low.
Describe the solution you'd like
Instead of rewriting the zkstatereader.java monster in golang, how about using the java library itself in the solr operator?
I was able to prototype this by creating a simple REST interface on top of zkstatereader. You can check it out in the solr-zkstatereader-rest repo and in the swarupdonepudi/zkreader repo on Docker Hub.
Basically the idea is to run this zkstatereader container as a sidecar container to the solr-operator and send all the queries that zkstatereader can answer to this container on localhost. This way we don't lose out on upstream updates to the library either: we simply bump the Maven dependency version and rebuild the image.
Describe alternatives you've considered
The other alternative is to actually implement this in Go, but that is going to take us forever and does not really add any value.
Additional context
The next step is to create a CI/CD process for solr-zkstatereader-rest on Travis and to update the solr-operator to run it as a sidecar container in the solr-operator pod. I look forward to feedback from @HoustonPutman and @sepulworld on this one.
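The sidecar idea above can be sketched as a plain HTTP client in the operator's own language. This is only an illustration: the `/collections` route, the port, and the JSON response shape are assumptions, not the prototype's documented API.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// parseCollections decodes the JSON array of collection names the sidecar
// would return.
func parseCollections(body []byte) ([]string, error) {
	var collections []string
	err := json.Unmarshal(body, &collections)
	return collections, err
}

// fetchCollections asks the zkstatereader sidecar, reachable on localhost
// inside the operator pod, for the list of collections. The "/collections"
// route and response shape are assumptions for illustration.
func fetchCollections(baseURL string) ([]string, error) {
	resp, err := http.Get(baseURL + "/collections")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	return parseCollections(body)
}

func main() {
	cols, _ := parseCollections([]byte(`["films","techproducts"]`))
	fmt.Println(cols)
}
```

Because the operator and the sidecar share a pod, `baseURL` would be something like `http://localhost:8080`, keeping collection-state queries off the Solr nodes entirely.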
Describe the bug
We should be able to use a zk-operator generated ZK cluster and specify which chroot to use for the solrcloud (I would like to not use '/' but perhaps use '/{name_of_cloud}').
To Reproduce
Stand by, I'll post the test YAMLs I have tried to use unsuccessfully.
Expected behavior
ZK_HOST/<custom_chroot>
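The expected `ZK_HOST/<custom_chroot>` composition can be sketched as follows (an illustrative helper, not operator code):

```go
package main

import (
	"fmt"
	"strings"
)

// zkHostWithChroot appends a custom chroot to a ZooKeeper connection string,
// producing the ZK_HOST value the report above expects. An empty or root
// chroot leaves the connection string untouched.
func zkHostWithChroot(connString, chroot string) string {
	if chroot == "" || chroot == "/" {
		return connString
	}
	if !strings.HasPrefix(chroot, "/") {
		chroot = "/" + chroot
	}
	return connString + chroot
}

func main() {
	fmt.Println(zkHostWithChroot("zk-0:2181,zk-1:2181", "my-cloud"))
	// -> zk-0:2181,zk-1:2181/my-cloud
}
```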
Currently the operator requires you to have already registered your Solr CRDs in Kubernetes. It would be nice if the operator could load in the Solr CRDs that are not already registered.
Describe the bug
When creating a collection and not specifying routerField it's set to the empty string which prevents anything from being indexed.
To Reproduce
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCollection
metadata:
  name: example-col
spec:
  solrCloud: example
  collection: example-col
  routerName: compositeId
  autoAddReplicas: false
  numShards: 1
  replicationFactor: 1
  maxShardsPerNode: 1
If I apply the above config I will not be able to index anything in the collection, as router.field will be set to the empty string and Solr will throw "SolrException: No value for :. Unable to identify shard"
Expected behavior
router.field should not be set at all if it's missing in the config
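The fix amounts to omitting the parameter instead of sending an empty value. A minimal sketch in Go (the parameter names follow the Solr Collections API; the helper itself is hypothetical, not the operator's actual function):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildCreateParams builds the Collections API CREATE query, leaving
// router.field out entirely when routerField is empty, instead of sending
// router.field= (which makes Solr reject every document at index time).
func buildCreateParams(name, routerName, routerField string, numShards int) url.Values {
	v := url.Values{}
	v.Set("action", "CREATE")
	v.Set("name", name)
	v.Set("router.name", routerName)
	v.Set("numShards", fmt.Sprint(numShards))
	if routerField != "" { // only set when the user actually specified it
		v.Set("router.field", routerField)
	}
	return v
}

func main() {
	fmt.Println(buildCreateParams("example-col", "compositeId", "", 1).Encode())
}
```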
Is your feature request related to a problem? Please describe.
persistentVolumeReclaimPolicy isn't an option under PersistentVolumeClaimSpec (v1 core). I would like to be able to set it to "Delete" for our solrcloud test pipelines, where I don't want the PVCs sticking around after deletion.
What would it take to be able to set different values for persistentVolumeReclaimPolicy?
Do we need orphan-PVC cleanup logic like I see in zookeeper-operator, or would just setting this do the cleanup for us?
Describe the solution you'd like
Ability to configure under dataPvcSpec "Delete" for persistentVolumeReclaimPolicy
Describe alternatives you've considered
In the meantime I'll write a cleanup script outside the operator in our pipeline.
Describe the bug
The current example/test_solrprometheusexporter.yaml doesn't work. The spec reference (https://github.com/bloomberg/solr-operator/blob/master/example/test_solrprometheusexporter.yaml#L10) doesn't exist in types. Also, from within the cluster the ingress doesn't connect.
solr@solrprometheusexporter-sample-solr-metrics-686b9df785-5kjwt:/opt/solr-8.2.0$ curl http://default-example-solrcloud-0.ing.local.domain:80/solr
curl: (7) Failed to connect to default-example-solrcloud-0.ing.local.domain port 80: Connection refused
I suggest we switch the example over to using the service endpoint. Seems like this is the more reasonable way for a pod to talk to a service within the cluster anyways.
Additional context
Perhaps update example/test_solrprometheusexporter.yaml to:
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrPrometheusExporter
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: solrprometheusexporter-example
spec:
  solrReference:
    standalone:
      address: "http://example-solrcloud-common.default/solr"
  image:
    tag: 8.2.0
Open to other ideas here. I can create a PR based on feedback here once the kubebuilder v2 migration is done.
Describe the bug
When creating a SolrCloud the example-solrcloud-0 pod never becomes ready.
To Reproduce
Create a simple SolrCloud example with operator args:
In the logs of the pod example-solrcloud-0 this is found:
java.lang.IllegalArgumentException: A HostProvider may not be empty!
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:88)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:449)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:383)
kubectl shows this:
pod/example-solrcloud-0 0/1 Running 0 3m
The solr pod never becomes ready. When checking the pod it says ZK_HOST is "/". When watching the solr statefulset it looks like the pod template has a correct ZK_HOST. But currentRevision and updateRevision for the solr statefulset differ so it's likely that the first revision of the statefulset has an invalid ZK_HOST and the second revision has a correct one. But the pod is stuck with the first revision of the statefulset and can't recover. Kubernetes tries to restart the pod many times, but it's always the same error because ZK_HOST is still "/".
A workaround is to delete the example-solrcloud-0 pod, the installation will then continue and succeed.
Is your feature request related to a problem? Please describe.
We aim to run Solr in Amazon EKS (managed Kubernetes). We have microservices in an Amazon VPC that want to communicate with Solr and ZooKeeper running in a Kubernetes cluster that belongs to the same VPC. Amazon EKS exposes pod IPs in the VPC, but not service IPs, so our microservices (outside of Kubernetes but in the same VPC) can only talk to Solr pods, not Solr services. The Solr Operator can't easily be configured to enable this use case.
Describe the solution you'd like
Possibility to configure a template-like expression for hostnames: {{ .Name }}.{{ .Namespace }}.ourdomain.com
Possibility to configure the service port; it's now hardcoded to 80.
Additional context
By using ExternalDNS (https://github.com/kubernetes-incubator/external-dns) and annotating the Solr and Zookeeper headless services we can get Solr and Zookeeper pod IPs (VPC IPs, not Kubernetes internal IPs) registered in DNS (Route53). Our microservices can use these DNS records to talk to Solr pods.
As an example, let's say we create a solrcloud "example" with ingress-base-domain=ourdomain.com, namespace=default, ExternalDNS enabled, the result is then
hosts registered in zookeeper:
default-example-solrcloud-0.ourdomain.com:80
default-example-solrcloud-zookeeper-0.ourdomain.com:80
pods:
example-solrcloud-0 8983
example-solrcloud-zookeeper-0 8983
services:
example-solrcloud-0 80
example-solrcloud-zookeeper-0 80
dns:
example-solrcloud-0.ourdomain.com
example-solrcloud-zookeeper-0.ourdomain.com
With the help of ExternalDNS our own microservices can now use example-solrcloud-zookeeper-0.ourdomain.com:2181 as address to zookeeper. It will resolve to the Zookeeper pod IP in the VPC. However, the solrj client will get the state of the cluster from Zookeeper and it says all hosts are prefixed with "default-". So solrj tries to use the hostname default-example-solrcloud-0.ourdomain.com to talk to solr. But this host does not exist in DNS. If it was possible to configure the Solr Operator how the hostnames are generated it would be possible to skip the namespace and generate the same hostnames as ExternalDNS (example-solrcloud-0.ourdomain.com).
With this change our own microservices could now get a working Solr hostname and IP. But this will still not work because the port is wrong. The Solr pods are reachable on port 8983, but Zookeeper will tell the clients that Solr can be reached on port 80 (because that is the internal Kubernetes service port). If it was possible to configure the Solr Operator to have 8983 as the service port rather than 80 this communication would work.
This way the hostname/port is the same for external and internal communication, but they are resolved to pods for external access and services for internal access. We have tested to implement this and it seems to work. No ingresses or loadbalancers are needed.
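The hostname-template proposal can be sketched with Go's text/template. This is a sketch under the assumption that the operator would expose such a template option; it is not actual operator code, and the template strings mirror the syntax proposed above.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// nodeHostname renders an external hostname for a Solr pod from a
// user-supplied template, so a deployment like the EKS/ExternalDNS setup
// above can simply omit {{ .Namespace }} and match the DNS records that
// ExternalDNS creates.
func nodeHostname(tmpl, name, namespace string) (string, error) {
	t, err := template.New("host").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	err = t.Execute(&buf, struct{ Name, Namespace string }{name, namespace})
	return buf.String(), err
}

func main() {
	// Namespace-free template: matches ExternalDNS-registered records.
	h, _ := nodeHostname("{{ .Name }}.ourdomain.com", "example-solrcloud-0", "default")
	fmt.Println(h)
}
```

With the first template, ZooKeeper would advertise `example-solrcloud-0.ourdomain.com` instead of the `default-`prefixed name, which is exactly the record ExternalDNS already maintains.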
Add ability for the operator to create solr collections
@sepulworld recently found an issue with using the status subResource for the SolrCloud CRD with Kubernetes 1.10.
We should have a list of kubernetes versions, the compatibility and any feature gates that are required for the operator to work on that kubernetes version.
Is your feature request related to a problem? Please describe.
I would like to be able to specify the router.field for a collection.
"If this field is specified, the router will look at the value of the field in an input document to compute the hash and identify a shard instead of looking at the uniqueKey field. If the field specified is null in the document, the document will be rejected. Please note that RealTime Get or retrieval by id would also require the parameter route (or shard.keys) to avoid a distributed search."
Describe the solution you'd like
Add this to the SolrCollection spec as an optional parameter for creation of a collection. This parameter is not modifiable after the collection has been created.
I am presently looking to add this, and will have a PR if you want to assign to me
Right now both accept a cloud in the form of a string. This is meant to be the name of a solrCloud resource in the same namespace as the collection/alias resource referencing it.
We should make these more flexible, like the PrometheusExporter CRD.
So something like:
spec:
  solr:
    kubeSolr:
      name: test
      namespace: test
    cloudUrl: test.com:8983/solr
    zkConnectionInformation:
      connectionString: test1:2181,test2:2181
      chRoot: /test
All of these options would be optional, and would have defaults. But one of kubeSolr, cloudUrl or zkConnectionInformation (names not set in stone) would have to be specified.
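The "exactly one of" constraint the proposal implies can be sketched as follows (field names follow the example above and are explicitly not set in stone):

```go
package main

import (
	"errors"
	"fmt"
)

// KubeSolr points at a SolrCloud resource by name/namespace.
type KubeSolr struct{ Name, Namespace string }

// ZkConnectionInformation points directly at ZooKeeper.
type ZkConnectionInformation struct{ ConnectionString, ChRoot string }

// SolrReference mirrors the proposed spec: exactly one of the three ways to
// reference a Solr cluster must be filled in.
type SolrReference struct {
	KubeSolr                *KubeSolr
	CloudUrl                string
	ZkConnectionInformation *ZkConnectionInformation
}

// Validate enforces the one-of constraint.
func (r SolrReference) Validate() error {
	n := 0
	if r.KubeSolr != nil {
		n++
	}
	if r.CloudUrl != "" {
		n++
	}
	if r.ZkConnectionInformation != nil {
		n++
	}
	if n != 1 {
		return errors.New("exactly one of kubeSolr, cloudUrl or zkConnectionInformation must be specified")
	}
	return nil
}

func main() {
	ref := SolrReference{CloudUrl: "test.com:8983/solr"}
	fmt.Println(ref.Validate())
}
```

In a real CRD this check would live in a validating webhook or in the Reconcile loop, so a misconfigured resource is rejected before the controller acts on it.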
Describe the bug
While deploying the Solr operator on our K8s cluster environment, the operator pod is failing with a CrashLoopBackOff error.
To Reproduce
Steps to reproduce the behavior:
NAME READY STATUS RESTARTS AGE
pod/solr-operator-b658b985b-z87h7 0/1 CrashLoopBackOff 28 126m
pod/zk-operator-5c769d7747-w2jv8 1/1 Running 0 130m
kubectl logs pod/solr-operator-b658b985b-z87h7
{"level":"info","ts":1572414608.8584437,"logger":"entrypoint","msg":"solr-operator Version: 0.2.1"}
{"level":"info","ts":1572414608.8585217,"logger":"entrypoint","msg":"solr-operator Git SHA: "}
{"level":"info","ts":1572414608.8585274,"logger":"entrypoint","msg":"Go Version: go1.12.5"}
{"level":"info","ts":1572414608.8585312,"logger":"entrypoint","msg":"Go OS/Arch: linux / amd64"}
{"level":"info","ts":1572414608.8585346,"logger":"entrypoint","msg":"setting up client for manager"}
{"level":"info","ts":1572414608.859087,"logger":"entrypoint","msg":"setting up manager"}
{"level":"info","ts":1572414609.1637375,"logger":"entrypoint","msg":"Registering Components."}
{"level":"info","ts":1572414609.1637914,"logger":"entrypoint","msg":"setting up scheme"}
{"level":"info","ts":1572414609.255684,"logger":"entrypoint","msg":"Setting up controller"}
{"level":"info","ts":1572414609.2559223,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrbackup-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.25624,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrbackup-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2564533,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2565868,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.256682,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2567837,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2568824,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2569659,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2570605,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.257194,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcollection-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.25732,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrprometheusexporter-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2574298,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrprometheusexporter-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2574573,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrprometheusexporter-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2574737,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrprometheusexporter-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1572414609.2574875,"logger":"entrypoint","msg":"setting up webhooks"}
{"level":"info","ts":1572414609.257492,"logger":"entrypoint","msg":"Starting the Cmd."}
Expected behavior
As per the getting-started steps, the Solr operator should be running once we deploy the ZooKeeper operator (optional), but the Solr operator is failing to start.
Environment (please complete the following information):
Describe the bug
When deleting a collection the command hangs and solr operator log shows a panic.
To Reproduce
kubectl delete -f solrcollection.yaml
The log says
2020-03-25T17:05:28.980Z DPANIC controller odd number of arguments passed as key-value pairs for logging {"ignored key": "collection"}
github.com/go-logr/zapr.handleFields
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:106
github.com/go-logr/zapr.(*zapLogger).Error
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:129
github.com/bloomberg/solr-operator/controllers/util.DeleteCollection
/workspace/controllers/util/collection_util.go:106
github.com/bloomberg/solr-operator/controllers.(*SolrCollectionReconciler).Reconcile
/workspace/controllers/solrcollection_controller.go:99
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
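The DPANIC above comes from logr's requirement that the variadic key/value tail of a log call have even length: the call site passed a bare "collection" key with no value. A minimal sketch of the invariant (the logger call shapes in the comments are illustrative, not the operator's exact call sites):

```go
package main

import "fmt"

// evenKeyValues reports whether a logr-style keysAndValues list is well
// formed. zapr DPANICs on an odd-length list, which is exactly the crash in
// the trace above.
func evenKeyValues(keysAndValues ...interface{}) bool {
	return len(keysAndValues)%2 == 0
}

func main() {
	// Buggy shape: logger.Error(err, "msg", "collection") -> key with no value
	fmt.Println(evenKeyValues("collection")) // false: triggers the DPANIC
	// Fixed shape: logger.Error(err, "msg", "collection", collectionName)
	fmt.Println(evenKeyValues("collection", "escenic")) // true: safe
}
```

The fix in the controller is simply to pair every key with a value at the call site; no logging-library change is needed.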
Is your feature request related to a problem? Please describe.
Be able to configure the backup to run on a scheduled basis. Right now it is a one time run.
I cannot find a way to make this operator spread across AZs in AWS, any suggestions would be welcome.
More info
bitnami/charts#927
https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
Is your feature request related to a problem? Please describe.
Add support for imagePullSecrets in the PrometheusExporter container spec, similar to what we now do with the Solr container spec
Describe the solution you'd like
Should be an optional spec input
We currently have fairly good testing around creating new Solr Clouds & Prometheus Exporters. However, the unit tests do not check that values of dependent resources are correctly updated when the Spec of a cloud/exporter is changed.
Desired Solution
Have some tests that make sure that a cloud is set up correctly, then make sure that when values are changed, the underlying kubernetes state changes along with it.
Feature:
Currently there is no way to upload a configset for a solrcloud created using the SolrCloud resource.
It would be nice if configsets for solrclouds were also managed by the solr-operator.
Current Alternative:
Currently we are using the Solr Configsets API (https://lucene.apache.org/solr/guide/8_2/configsets-api.html) to upload a configset. One big limitation of the Configsets API is that it is not possible to update an existing configset that was previously uploaded. The workaround is to create a new configset and reload collections to use it, as described here - https://dev.lucene.apache.narkive.com/HI7lTF0o/jira-created-solr-12925-configsets-api-should-allow-update-of-existing-configset#post4
It is possible to upload a configset directly to ZooKeeper, the way the solrj library (written in Java) does. However, we would need a ZooKeeper library in Go that is capable of uploading a config folder to a ZooKeeper instance.
The best zk client/SDK in Go seems to be https://github.com/samuel/go-zookeeper/tree/master/zk, but from an initial look it does not seem to have the capability to upload a config folder to ZooKeeper.
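A Go sketch of the mapping from a local configset to the ZooKeeper paths an upload would create: Solr stores configsets under /configs/<name>, which is where solrj's upconfig writes them. The actual znode writes, which would need a Go ZooKeeper client, are left out here.

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
	"strings"
)

// znodePaths maps local configset files (relative paths) to the ZooKeeper
// znode paths an upload would create under /configs/<configName>. A real
// uploader would then create each znode with the file's bytes as data.
func znodePaths(configName string, files []string) []string {
	base := path.Join("/configs", configName)
	out := make([]string, 0, len(files))
	for _, f := range files {
		rel := strings.TrimPrefix(filepath.ToSlash(f), "./")
		out = append(out, path.Join(base, rel))
	}
	return out
}

func main() {
	fmt.Println(znodePaths("escenic", []string{"solrconfig.xml", "lang/stopwords_en.txt"}))
}
```

Walking a directory with filepath.Walk and feeding the relative paths through this mapping gives the full set of znodes to create; intermediate znodes (like /configs/escenic/lang) would also need to exist before the leaf writes.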
There is a chicken/egg problem with restarting SolrCloud pods: services normally route only to pods that pass their readinessCheck. So if we want both of these to be true, then the SolrNode services (both the individual node services and the headless service) need to route to pods regardless of whether that node is ready or not. This can be achieved by using the following option:
Service:
  Spec:
    PublishNotReadyAddresses: true
Hi All,
We are having the below error in the solr-operator.
DPANIC controller odd number of arguments passed as key-value pairs for logging {"ignored key": ""}
github.com/go-logr/zapr.handleFields
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:106
github.com/go-logr/zapr.(*infoLogger).Info
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:70
github.com/bloomberg/solr-operator/controllers/util.CreateCollection
/workspace/controllers/util/collection_util.go:46
github.com/bloomberg/solr-operator/controllers.reconcileSolrCollection
/workspace/controllers/solrcollection_controller.go:172
github.com/bloomberg/solr-operator/controllers.(*SolrCollectionReconciler).Reconcile
/workspace/controllers/solrcollection_controller.go:69
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
E0309 01:40:20.090053 1 runtime.go:76] Observed a panic: odd number of arguments passed as key-value pairs for logging
goroutine 359 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x13a1880, 0xc000647740)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x82
panic(0x13a1880, 0xc000647740)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000efb80, 0xc0003aa780, 0x1, 0x1)
/go/pkg/mod/go.uber.org/[email protected]/zapcore/entry.go:229 +0x546
go.uber.org/zap.(*Logger).DPanic(0xc00007b2c0, 0x15fa803, 0x3d, 0xc0003aa780, 0x1, 0x1)
/go/pkg/mod/go.uber.org/[email protected]/logger.go:215 +0x7f
github.com/go-logr/zapr.handleFields(0xc00007b2c0, 0xc000647700, 0x1, 0x1, 0x0, 0x0, 0x0, 0x7f8904d9e008, 0x0, 0x0)
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:106 +0x5ce
github.com/go-logr/zapr.(*infoLogger).Info(0xc0002ef2e8, 0x15fc4a1, 0x3e, 0xc000647700, 0x1, 0x1)
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:70 +0xb1
github.com/bloomberg/solr-operator/controllers/util.CreateCollection(0xc0003d30c0, 0x7, 0xc000491920, 0x12, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, ...)
/workspace/controllers/util/collection_util.go:46 +0x5c4
github.com/bloomberg/solr-operator/controllers.reconcileSolrCollection(0xc000319740, 0xc000330540, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
/workspace/controllers/solrcollection_controller.go:172 +0x2b8
github.com/bloomberg/solr-operator/controllers.(*SolrCollectionReconciler).Reconcile(0xc000319740, 0xc000056ce9, 0x7, 0xc000491920, 0x12, 0xc00047fcd8, 0xc000312cf0, 0xc0000f23f8, 0x1795120)
/workspace/controllers/solrcollection_controller.go:69 +0x375
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000336b40, 0x1458380, 0xc0003adf40, 0xc00040cd00)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000336b40, 0x0)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc000336b40)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000647640)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000647640, 0x3b9aca00, 0x0, 0x1, 0xc000316180)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc000647640, 0x3b9aca00, 0xc000316180)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:193 +0x328
panic: odd number of arguments passed as key-value pairs for logging [recovered]
panic: odd number of arguments passed as key-value pairs for logging
goroutine 359 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
panic(0x13a1880, 0xc000647740)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000efb80, 0xc0003aa780, 0x1, 0x1)
/go/pkg/mod/go.uber.org/[email protected]/zapcore/entry.go:229 +0x546
go.uber.org/zap.(*Logger).DPanic(0xc00007b2c0, 0x15fa803, 0x3d, 0xc0003aa780, 0x1, 0x1)
/go/pkg/mod/go.uber.org/[email protected]/logger.go:215 +0x7f
github.com/go-logr/zapr.handleFields(0xc00007b2c0, 0xc000647700, 0x1, 0x1, 0x0, 0x0, 0x0, 0x7f8904d9e008, 0x0, 0x0)
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:106 +0x5ce
github.com/go-logr/zapr.(*infoLogger).Info(0xc0002ef2e8, 0x15fc4a1, 0x3e, 0xc000647700, 0x1, 0x1)
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:70 +0xb1
github.com/bloomberg/solr-operator/controllers/util.CreateCollection(0xc0003d30c0, 0x7, 0xc000491920, 0x12, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, ...)
/workspace/controllers/util/collection_util.go:46 +0x5c4
github.com/bloomberg/solr-operator/controllers.reconcileSolrCollection(0xc000319740, 0xc000330540, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
/workspace/controllers/solrcollection_controller.go:172 +0x2b8
github.com/bloomberg/solr-operator/controllers.(*SolrCollectionReconciler).Reconcile(0xc000319740, 0xc000056ce9, 0x7, 0xc000491920, 0x12, 0xc00047fcd8, 0xc000312cf0, 0xc0000f23f8, 0x1795120)
/workspace/controllers/solrcollection_controller.go:69 +0x375
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000336b40, 0x1458380, 0xc0003adf40, 0xc00040cd00)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000336b40, 0x0)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc000336b40)
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000647640)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000647640, 0x3b9aca00, 0x0, 0x1, 0xc000316180)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc000647640, 0x3b9aca00, 0xc000316180)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:193 +0x328
Describe the bug
I'd like to have a custom solr.xml, but every time I try to update the configmap it just resets back to the original version. I know that this is the operator reconciling the resources back to the desired state, but it also removes the ability to configure solr.xml. This happens even if the configmap existed before the Solr cluster was created.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Depends on how you all want to solve this. IMO you should be able to deploy a configmap with your solr cluster and reference it in the CRD. Similar to how you can connect to an existing ZK Cluster by passing in a connection string.
You could also solve this by passing the values present in the Solr config into the YAML file and letting them be updated there (i.e. adding a "config" block for Solr that maps to values in solr.xml).
Environment (please complete the following information):
Solr 8.4.1 and 7.7.2, EKS v1.14
Describe the bug
On creating a solr collection via this crd:
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCollection
metadata:
  name: escenic
  namespace: solr
spec:
  solrCloud: escenic
  collection: escenic
  routerName: compositeId
  routerField: id
  autoAddReplicas: true
  numShards: 5
  replicationFactor: 2
  maxShardsPerNode: 1
  collectionConfigName: escenic
I'm getting an error:
{
  "namespace": "solr",
  "cloud": "escenic",
  "collection": "escenic",
  "error": "Recieved bad response code of 400 from solr with response: {\"responseHeader\":{\"status\":400,\"QTime\":3},\"error\":{\"metadata\":[\"error-class\",\"org.apache.solr.common.SolrException\",\"root-error-class\",\"org.apache.solr.common.SolrException\"],\"msg\":\"Collection: escenic not found\",\"code\":400}}\n"
}
This happens because it tries to call Solr via the headless service ("$(POD_HOSTNAME)." + solrCloud.HeadlessServiceName()), which is not available in the cluster.
In this case the call to http://escenic-solrcloud-3.escenic-solrcloud-headless:80/solr
gets a connection refused, while the call to http://escenic-solrcloud-3.solr:80/solr
(the non-headless service) works completely fine.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
To see the collection be created, or at least to have services used that exist. Maybe I'm missing some step to create the expected services. Or it should be possible to use the existing svc (pod-hostname.namespace) instead, but when I change that in the YAML it gets immediately reverted to "$(POD_HOSTNAME)." + solrCloud.HeadlessServiceName() (which is good, but doesn't work for me).
Environment (please complete the following information):
Some CRDs for this operator, especially the SolrCloud CRD, are getting extremely large. This is because we are importing options from base Kubernetes objects, which expand out to be extremely large.
Because these files are much too big to parse, we should have documentation for each CRD, explaining at least the basics of using each one. Eventually we can hopefully start to use links to the other types of resources, so they don't have to be explicitly listed in our CRDs. But for now I think the solution is just to add better documentation.
Describe the bug
The CreateCollection function is missing the optional maxShardsPerNode parameter, while the ModifyCollection function has it.
Expected behavior
We should include this optional parameter in the CreateCollection function.
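A minimal sketch of how the optional parameter could be threaded into the Collections API request; `createCollectionQuery` and its signature are hypothetical helpers for illustration, not the operator's actual code.

```go
package main

import (
	"fmt"
	"net/url"
)

// createCollectionQuery builds the query string for the Collections API
// CREATE action. maxShardsPerNode is only included when explicitly set
// (> 0), so existing behavior is unchanged for callers that omit it.
func createCollectionQuery(name string, numShards, replicationFactor, maxShardsPerNode int) string {
	q := url.Values{}
	q.Set("action", "CREATE")
	q.Set("name", name)
	q.Set("numShards", fmt.Sprintf("%d", numShards))
	q.Set("replicationFactor", fmt.Sprintf("%d", replicationFactor))
	if maxShardsPerNode > 0 {
		q.Set("maxShardsPerNode", fmt.Sprintf("%d", maxShardsPerNode))
	}
	return "/solr/admin/collections?" + q.Encode()
}

func main() {
	// url.Values.Encode sorts parameters alphabetically by key.
	fmt.Println(createCollectionQuery("escenic", 5, 2, 1))
}
```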
Logging bug as issue, will fix once we get to kubebuilder v2
Describe the bug
Followed the steps in the README for the example setup on Docker for Mac (Version 2.0.0.3 (31259)).
To Reproduce
Steps to reproduce the behavior:
Switched to 0.1.0 for solr-operator and updated the args to work with 0.1.0.
Updated args:
- -etcd-operator=false
- -zk-operator
- -ingress-base-domain=localhost
Example solrcloud to deploy:
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCloud
metadata:
  name: "zane"
spec:
  replicas: 2
  solrImage:
    tag: 8.1.1
Confirmed the pods were running beforehand:
etcd-operator-7cc54dcd69-xwqrd 1/1 Running 0 3h
solr-operator-5fc564b9b7-7j4dt 1/1 Running 0 2m
zk-operator-54fdfdc6bb-tkn8s 1/1 Running 0 43m
Error in solr-operator log:
{"level":"info","ts":1564610024.9087832,"logger":"entrypoint","msg":"solr-operator Version: 0.2.1"}
{"level":"info","ts":1564610024.9160395,"logger":"entrypoint","msg":"solr-operator Git SHA: "}
{"level":"info","ts":1564610024.9162643,"logger":"entrypoint","msg":"Go Version: go1.12.5"}
{"level":"info","ts":1564610024.91643,"logger":"entrypoint","msg":"Go OS/Arch: linux / amd64"}
{"level":"info","ts":1564610024.9165854,"logger":"entrypoint","msg":"setting up client for manager"}
{"level":"info","ts":1564610024.9170692,"logger":"entrypoint","msg":"setting up manager"}
{"level":"info","ts":1564610025.3872402,"logger":"entrypoint","msg":"Registering Components."}
{"level":"info","ts":1564610025.3876698,"logger":"entrypoint","msg":"setting up scheme"}
{"level":"info","ts":1564610025.3883626,"logger":"entrypoint","msg":"Setting up controller"}
{"level":"info","ts":1564610025.3889415,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1564610025.389596,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1564610025.3900578,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1564610025.3905382,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1564610025.390992,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1564610025.3914642,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1564610025.391863,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"solrcloud-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1564610025.392333,"logger":"entrypoint","msg":"setting up webhooks"}
{"level":"info","ts":1564610025.3924916,"logger":"entrypoint","msg":"Starting the Cmd."}
{"level":"info","ts":1564610026.4861755,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"solrcloud-controller"}
{"level":"info","ts":1564610026.5874474,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"solrcloud-controller","worker count":1}
{"level":"info","ts":1564610026.5890381,"logger":"controller","msg":"Setting default settings for solr-cloud"}
{"level":"error","ts":1564610026.6012979,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"solrcloud-controller","request":"default/zane","error":"SolrCloud.solr.bloomberg.com \"zane\" is invalid: []: Invalid value: map[string]interface {}{\"kind\":\"SolrCloud\", \"apiVersion\":\"solr.bloomberg.com/v1beta1\", \"metadata\":map[string]interface {}{\"annotations\":map[string]interface {}{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"solr.bloomberg.com/v1beta1\\\",\\\"kind\\\":\\\"SolrCloud\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"zane\\\",\\\"namespace\\\":\\\"default\\\"},\\\"spec\\\":{\\\"replicas\\\":2,\\\"solrImage\\\":{\\\"tag\\\":\\\"8.1.1\\\"}}}\\n\"}, \"namespace\":\"default\", \"selfLink\":\"/apis/solr.bloomberg.com/v1beta1/namespaces/default/solrclouds/zane\", \"creationTimestamp\":\"2019-07-31T21:40:10Z\", \"generation\":1, \"name\":\"zane\", \"uid\":\"c55a94c1-b3db-11e9-a110-025000000001\", \"clusterName\":\"\", \"resourceVersion\":\"153254\"}, \"spec\":map[string]interface {}{\"solrImage\":map[string]interface {}{\"repository\":\"library/solr\", \"tag\":\"8.1.1\", \"pullPolicy\":\"IfNotPresent\"}, \"busyBoxImage\":map[string]interface {}{\"pullPolicy\":\"IfNotPresent\", \"repository\":\"library/busybox\", \"tag\":\"1.28.0-glibc\"}, \"replicas\":2, \"zookeeperRef\":map[string]interface {}{\"provided\":map[string]interface {}{\"zookeeper\":map[string]interface {}{\"replicas\":3, \"image\":map[string]interface {}{\"repository\":\"emccorp/zookeeper\", \"tag\":\"3.5.4-beta-operator\", \"pullPolicy\":\"IfNotPresent\"}, \"persistentVolumeClaimSpec\":map[string]interface {}{\"resources\":map[string]interface {}{\"requests\":map[string]interface {}{\"storage\":\"5Gi\"}}, \"dataSource\":interface {}(nil), \"accessModes\":[]interface {}{\"ReadWriteOnce\"}}}, \"zetcd\":interface {}(nil)}}}, \"status\":map[string]interface {}{\"replicas\":0, \"readyReplicas\":0, \"version\":\"\", 
\"internalCommonAddress\":\"\", \"zookeeperConnectionInfo\":map[string]interface {}{\"internalConnectionString\":\"\", \"chroot\":\"\"}, \"solrNodes\":interface {}(nil)}}: validation failure list:\nstatus.solrNodes in body must be of type array: \"null\"","stacktrace":"github.com/bloomberg/solr-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/bloomberg/solr-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\ngithub.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"info","ts":1564610027.5688612,"logger":"controller","msg":"Setting default settings for solr-cloud"}
{"level":"error","ts":1564610027.5781262,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"solrcloud-controller","request":"default/zane","error":"SolrCloud.solr.bloomberg.com \"zane\" is invalid: []: Invalid value: map[string]interface {}{\"status\":map[string]interface {}{\"replicas\":0, \"readyReplicas\":0, \"version\":\"\", \"internalCommonAddress\":\"\", \"zookeeperConnectionInfo\":map[string]interface {}{\"internalConnectionString\":\"\", \"chroot\":\"\"}, \"solrNodes\":interface {}(nil)}, \"kind\":\"SolrCloud\", \"apiVersion\":\"solr.bloomberg.com/v1beta1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/apis/solr.bloomberg.com/v1beta1/namespaces/default/solrclouds/zane\", \"creationTimestamp\":\"2019-07-31T21:40:10Z\", \"name\":\"zane\", \"clusterName\":\"\", \"uid\":\"c55a94c1-b3db-11e9-a110-025000000001\", \"resourceVersion\":\"153254\", \"generation\":1, \"annotations\":map[string]interface {}{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"solr.bloomberg.com/v1beta1\\\",\\\"kind\\\":\\\"SolrCloud\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"zane\\\",\\\"namespace\\\":\\\"default\\\"},\\\"spec\\\":{\\\"replicas\\\":2,\\\"solrImage\\\":{\\\"tag\\\":\\\"8.1.1\\\"}}}\\n\"}, \"namespace\":\"default\"}, \"spec\":map[string]interface {}{\"solrImage\":map[string]interface {}{\"repository\":\"library/solr\", \"tag\":\"8.1.1\", \"pullPolicy\":\"IfNotPresent\"}, \"busyBoxImage\":map[string]interface {}{\"repository\":\"library/busybox\", \"tag\":\"1.28.0-glibc\", \"pullPolicy\":\"IfNotPresent\"}, \"replicas\":2, \"zookeeperRef\":map[string]interface {}{\"provided\":map[string]interface {}{\"zetcd\":interface {}(nil), \"zookeeper\":map[string]interface {}{\"persistentVolumeClaimSpec\":map[string]interface {}{\"dataSource\":interface {}(nil), \"accessModes\":[]interface {}{\"ReadWriteOnce\"}, \"resources\":map[string]interface {}{\"requests\":map[string]interface {}{\"storage\":\"5Gi\"}}}, 
\"replicas\":3, \"image\":map[string]interface {}{\"pullPolicy\":\"IfNotPresent\", \"repository\":\"emccorp/zookeeper\", \"tag\":\"3.5.4-beta-operator\"}}}}}}: validation failure list:\nstatus.solrNodes in body must be of type array: \"null\"","stacktrace":"github.com/bloomberg/solr-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/bloomberg/solr-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\ngithub.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"info","ts":1564610028.6535652,"logger":"controller","msg":"Setting default settings for solr-cloud"}
{"level":"error","ts":1564610028.6598723,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"solrcloud-controller","request":"default/zane","error":"SolrCloud.solr.bloomberg.com \"zane\" is invalid: []: Invalid value: map[string]interface {}{\"spec\":map[string]interface {}{\"replicas\":2, \"zookeeperRef\":map[string]interface {}{\"provided\":map[string]interface {}{\"zookeeper\":map[string]interface {}{\"image\":map[string]interface {}{\"repository\":\"emccorp/zookeeper\", \"tag\":\"3.5.4-beta-operator\", \"pullPolicy\":\"IfNotPresent\"}, \"persistentVolumeClaimSpec\":map[string]interface {}{\"accessModes\":[]interface {}{\"ReadWriteOnce\"}, \"resources\":map[string]interface {}{\"requests\":map[string]interface {}{\"storage\":\"5Gi\"}}, \"dataSource\":interface {}(nil)}, \"replicas\":3}, \"zetcd\":interface {}(nil)}}, \"solrImage\":map[string]interface {}{\"repository\":\"library/solr\", \"tag\":\"8.1.1\", \"pullPolicy\":\"IfNotPresent\"}, \"busyBoxImage\":map[string]interface {}{\"tag\":\"1.28.0-glibc\", \"pullPolicy\":\"IfNotPresent\", \"repository\":\"library/busybox\"}}, \"status\":map[string]interface {}{\"internalCommonAddress\":\"\", \"zookeeperConnectionInfo\":map[string]interface {}{\"internalConnectionString\":\"\", \"chroot\":\"\"}, \"solrNodes\":interface {}(nil), \"replicas\":0, \"readyReplicas\":0, \"version\":\"\"}, \"kind\":\"SolrCloud\", \"apiVersion\":\"solr.bloomberg.com/v1beta1\", \"metadata\":map[string]interface {}{\"namespace\":\"default\", \"uid\":\"c55a94c1-b3db-11e9-a110-025000000001\", \"creationTimestamp\":\"2019-07-31T21:40:10Z\", \"generation\":1, \"annotations\":map[string]interface {}{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"solr.bloomberg.com/v1beta1\\\",\\\"kind\\\":\\\"SolrCloud\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"zane\\\",\\\"namespace\\\":\\\"default\\\"},\\\"spec\\\":{\\\"replicas\\\":2,\\\"solrImage\\\":{\\\"tag\\\":\\\"8.1.1\\\"}}}\\n\"}, 
\"name\":\"zane\", \"clusterName\":\"\", \"selfLink\":\"/apis/solr.bloomberg.com/v1beta1/namespaces/default/solrclouds/zane\", \"resourceVersion\":\"153254\"}}: validation failure list:\nstatus.solrNodes in body must be of type array: \"null\"","stacktrace":"github.com/bloomberg/solr-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/bloomberg/solr-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\ngithub.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"info","ts":1564610029.6663337,"logger":"controller","msg":"Setting default settings for solr-cloud"}
{"level":"error","ts":1564610029.754057,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"solrcloud-controller","request":"default/zane","error":"SolrCloud.solr.bloomberg.com \"zane\" is invalid: []: Invalid value: map[string]interface {}{\"kind\":\"SolrCloud\", \"apiVersion\":\"solr.bloomberg.com/v1beta1\", \"metadata\":map[string]interface {}{\"namespace\":\"default\", \"uid\":\"c55a94c1-b3db-11e9-a110-025000000001\", \"resourceVersion\":\"153254\", \"clusterName\":\"\", \"generation\":1, \"creationTimestamp\":\"2019-07-31T21:40:10Z\", \"annotations\":map[string]interface {}{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"solr.bloomberg.com/v1beta1\\\",\\\"kind\\\":\\\"SolrCloud\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"zane\\\",\\\"namespace\\\":\\\"default\\\"},\\\"spec\\\":{\\\"replicas\\\":2,\\\"solrImage\\\":{\\\"tag\\\":\\\"8.1.1\\\"}}}\\n\"}, \"name\":\"zane\", \"selfLink\":\"/apis/solr.bloomberg.com/v1beta1/namespaces/default/solrclouds/zane\"}, \"spec\":map[string]interface {}{\"replicas\":2, \"zookeeperRef\":map[string]interface {}{\"provided\":map[string]interface {}{\"zookeeper\":map[string]interface {}{\"image\":map[string]interface {}{\"repository\":\"emccorp/zookeeper\", \"tag\":\"3.5.4-beta-operator\", \"pullPolicy\":\"IfNotPresent\"}, \"persistentVolumeClaimSpec\":map[string]interface {}{\"accessModes\":[]interface {}{\"ReadWriteOnce\"}, \"resources\":map[string]interface {}{\"requests\":map[string]interface {}{\"storage\":\"5Gi\"}}, \"dataSource\":interface {}(nil)}, \"replicas\":3}, \"zetcd\":interface {}(nil)}}, \"solrImage\":map[string]interface {}{\"tag\":\"8.1.1\", \"pullPolicy\":\"IfNotPresent\", \"repository\":\"library/solr\"}, \"busyBoxImage\":map[string]interface {}{\"repository\":\"library/busybox\", \"tag\":\"1.28.0-glibc\", \"pullPolicy\":\"IfNotPresent\"}}, \"status\":map[string]interface {}{\"zookeeperConnectionInfo\":map[string]interface 
{}{\"internalConnectionString\":\"\", \"chroot\":\"\"}, \"solrNodes\":interface {}(nil), \"replicas\":0, \"readyReplicas\":0, \"version\":\"\", \"internalCommonAddress\":\"\"}}: validation failure list:\nstatus.solrNodes in body must be of type array: \"null\"","stacktrace":"github.com/bloomberg/solr-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/bloomberg/solr-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\ngithub.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/bloomberg/solr-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/bloomberg/solr-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
Environment (please complete the following information):
Is your feature request related to a problem? Please describe.
Ability to tune SOLR_OPTS and GC_TUNE; related docs:
https://lucene.apache.org/solr/guide/8_1/taking-solr-to-production.html#memory-and-gc-settings
https://lucene.apache.org/solr/guide/8_1/taking-solr-to-production.html#override-settings-in-solrconfig-xml
Describe the solution you'd like
These values typically require tuning when going to production. Give the Solr-Operator SolrCloud CRD the ability to tune the values passed to these environment variables for the SolrCloud process to use.
I'm starting work on this now. If you have any feedback or suggestions @HoustonPutman let me know! It should be straightforward, though.
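One possible shape for this on the SolrCloud spec, as a sketch only: the field names below (`solrJavaMem`, `solrOpts`, `solrGCTune`) are assumptions for discussion, not the operator's current API, and the JVM flags are illustrative values from the linked Solr production guide.

```yaml
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCloud
metadata:
  name: example
spec:
  replicas: 2
  # Hypothetical fields mapping onto the SOLR_JAVA_MEM, SOLR_OPTS,
  # and GC_TUNE environment variables of the Solr process:
  solrJavaMem: "-Xms4g -Xmx4g"
  solrOpts: "-Dsolr.autoSoftCommit.maxTime=10000"
  solrGCTune: "-XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90"
```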
Specifically, I would like to see Level 5 (Auto Pilot) of the operator capability maturity model achieved.
https://operatorhub.io
https://raw.githubusercontent.com/operator-framework/operator-sdk/master/doc/images/operator-capability-level.png
Is your feature request related to a problem? Please describe.
Presently, we have to use images from the public Docker repository. If you want to pull Solr images from a private Docker registry, this will not work with solr-operator.
Describe the solution you'd like
Update the stateful set to optionally include the ability to specify a K8s secret (a Docker registry secret, created outside of solr-operator) used to pull Solr images from a private Docker registry. This would allow us to bake custom JARs into a Solr image.
I wanted to file an issue for any discussion needed. I will start testing and working on this in a PR, as this is something our team needs in order to support baking custom JARs into Solr.
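A sketch of what this could look like, assuming a registry secret created ahead of time; the `imagePullSecret` field and the secret/registry names below are hypothetical, for illustration only.

```yaml
# Created outside the operator, e.g.:
#   kubectl create secret docker-registry my-registry-secret \
#     --docker-server=registry.example.com \
#     --docker-username=... --docker-password=...
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCloud
metadata:
  name: example
spec:
  solrImage:
    repository: registry.example.com/custom-solr  # image with custom JARs baked in
    tag: 8.1.1
    # Hypothetical field: the operator would copy this into the
    # stateful set's pod spec as an imagePullSecrets entry.
    imagePullSecret: my-registry-secret
```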
Is your feature request related to a problem? Please describe.
We would like to apply kube2iam roles to the pods in the solrcloud statefulset. The role will allow access to S3 for backups.
Snippet of what is needed:
spec:
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: role-arn
Describe the solution you'd like
The ability to add any spec/template/metadata annotations that developers request.
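For discussion, one hypothetical shape for exposing pod annotations on the SolrCloud spec; the `customSolrKubeOptions.podOptions.annotations` path below is an assumption about where such a field could live, not the operator's current API.

```yaml
apiVersion: solr.bloomberg.com/v1beta1
kind: SolrCloud
metadata:
  name: example
spec:
  replicas: 2
  # Hypothetical block: annotations the operator would copy onto
  # each Solr pod's metadata in the generated stateful set,
  # enabling integrations like kube2iam.
  customSolrKubeOptions:
    podOptions:
      annotations:
        iam.amazonaws.com/role: role-arn
```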