
solrs's Introduction

solrs - async solr client for java/scala


This is a Java/Scala Solr client that provides an interface like SolrJ, just asynchronous / non-blocking (built on top of async-http-client / Netty).

Key Features

  • Async, non-blocking API to Solr on the JVM: supports CompletableFuture for Java; for Scala you can choose between Twitter's Future and the standard library Future.
  • SolrCloud support
  • Optimized request routing (e.g. updates go to leaders, _route_ param is respected, replica.type is supported for shards.preference param)
  • Pluggable load balancing strategies, comes with a performance/statistics based load balancer
  • Support for retry policies in case of failures

Documentation

The documentation is available at https://inoio.github.io/solrs/

License

This software is licensed under the Apache 2 license, see LICENSE.txt.

solrs's People

Contributors

acehack, billyean, cponomaryov, gilnoh, magro, mahlingam, marcopriebe, migo, mkr, rodrigovedovato, sakulk, scala-steward, wcurrie

solrs's Issues

Enable 'raw' queries by overriding the default response parser per request

I am looking forward to integrating official solrs releases again, hopefully in the near future. Since I had to support existing frameworks relying on XML and JSON responses, the official solrs QueryResponse was not an option. In order to still use solrs' clustering capabilities, I had to find a way to get arbitrary response formats from solrs. Please see PR #49.
I am aware that there is an alternative solution: creating a new client instance for every response format. The reason for not going with that was the additional I/O caused by every new client instance to keep the cluster state up to date.
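
For context, SolrJ itself allows a response parser to be set per request; the following is only a sketch of what such a 'raw' query could look like (assumed usage, not the API added in PR #49):

import org.apache.solr.client.solrj.SolrQuery
import org.apache.solr.client.solrj.impl.NoOpResponseParser
import org.apache.solr.client.solrj.request.QueryRequest

// Sketch: a query request carrying its own response parser, so the body is
// returned unparsed (with NoOpResponseParser the raw string ends up in the
// "response" entry of the returned NamedList) instead of a QueryResponse.
val query = new SolrQuery("name:iPod")
query.set("wt", "json")
val request = new QueryRequest(query)
request.setResponseParser(new NoOpResponseParser("json"))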

Throughput lower than expected

In my test environment I've tested the solrs client for maximum throughput (requests per second).
I found that my upper bound is roughly 1600 req/s.
As a comparison I ran the plain SolrJ client in the same scenario and got 1500 req/s. Good.
Now comes the strange part of the story (at least for me).
In my tests, when I load balance requests across multiple SolrJ instances (running in the same JVM), throughput rises - for example, with 3 SolrJ instances I get up to 3000 req/s.
In contrast, using multiple solrs client instances does not give any performance boost: total throughput stays at about 1600 req/s.

Are there any solrs configurations I can try to increase throughput?
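
One knob worth checking (a sketch under the assumption that the HTTP layer is the limit; URL and limits are placeholders): solrs lets you pass in your own async-http-client instance, so connection limits can be raised there instead of creating more solrs client instances.

import io.ino.solrs.AsyncSolrClient
import io.ino.solrs.future.ScalaFutureFactory.Implicit
import org.asynchttpclient.{DefaultAsyncHttpClient, DefaultAsyncHttpClientConfig}

// The limits below are illustrative assumptions, not recommended settings.
val httpClient = new DefaultAsyncHttpClient(
  new DefaultAsyncHttpClientConfig.Builder()
    .setMaxConnections(200)
    .setMaxConnectionsPerHost(100)
    .build())

val solr = AsyncSolrClient.Builder("http://localhost:8983/solr/collection1")
  .withHttpClient(httpClient)
  .build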

Add dynamic load balancing strategy

There should be a load balancing strategy that observes the performance of solr servers and, based on their current performance, decides which one to choose. This should support / improve the following scenarios:

  • solrcloud spanning multiple data centers (with notable latency), so that there's additional latency to some of the solr instances.
  • different kinds of queries: imagine 4 solr instances and 2 kinds of queries, where e.g. query1 takes 10 ms on average and query2 takes 100 ms on average. The round-robin load balancer would hand the expensive queries to 2 solr instances and the cheap queries to the other 2 - quite an uneven distribution.
  • a solr instance suddenly performs worse than before (e.g. due to GC, a newSearcher being installed with a negative effect on query performance, or something else happening on this host).

Looking at other load balancer algorithms, e.g. F5's descriptions of load balancing algorithms sound interesting / useful as inspiration (especially dynamic round robin, observed and predictive).

Based on the scenarios to support, the implemented LB strategy should have the following characteristics (a usage sketch follows the list):

  • take solr server performance into account (via real-time server performance analysis)
  • react fast to changing solr server performance
  • take the kind of query into account (the easiest way is to ask the application to classify a query)
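
For reference, the statistics-based load balancer mentioned in the key features (FastestServerLB) appears to implement this. A rough usage sketch, with ZooKeeper host, collection name and test query as placeholder assumptions:

import io.ino.solrs.{AsyncSolrClient, CloudSolrServers, FastestServerLB}
import io.ino.solrs.future.ScalaFutureFactory.Implicit
import org.apache.solr.client.solrj.SolrQuery

val servers = new CloudSolrServers("zkhost1:2181,zkhost2:2181")
// The LB probes each server with a cheap test query and prefers the servers
// that currently respond fastest.
val lb = new FastestServerLB(servers, server => ("collection1", new SolrQuery("*:*").setRows(0)))
val solr = AsyncSolrClient.Builder(lb).build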

Add Scala 3 support

Also publish for Scala 3; maybe drop 2.12 support if this simplifies things.

See e.g.

Note, from https://docs.scala-lang.org/scala3/guides/migration/compatibility-classpath.html:

Disclaimer for library maintainers
Using the interoperability between Scala 2.13 and Scala 3 in a published library is generally not safe for your end-users.

Unless you know exactly what you are doing, it is discouraged to publish a Scala 3 library that depends on a Scala 2.13 library (the scala-library being excluded) or vice versa. The reason is to prevent library users from ending up with two conflicting versions foo_2.13 and foo_3 of the same foo library in their classpath, this problem being unsolvable in some cases.
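
For reference, a minimal build.sbt sketch of cross-publishing (the version numbers are placeholders, not the project's actual settings):

// sketch only; which Scala versions to target is exactly what this issue is about
ThisBuild / crossScalaVersions := Seq("2.13.12", "3.3.1")   // 2.12 possibly dropped
ThisBuild / scalaVersion       := "2.13.12"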

Is there a best practice to use APIs

What's the best practice for using JavaAsyncSolrClient? I am planning to use it in my web app, so I want to know whether it's thread-safe or not. Currently we use SolrJ's CloudSolrClient, which is thread-safe, and we share a singleton instance of it for all incoming requests. Can we do the same for JavaAsyncSolrClient?

Add support for hedged requests

From http://blog.acolyer.org/2015/01/15/the-tail-at-scale/:

Hedged requests: send the same requests to multiple servers, and use whatever response comes back first. To avoid doubling or tripling your computation load though, don’t send the hedging requests straight away:
defer sending a secondary request until the first request has been outstanding for more than the 95th-percentile expected latency for this class of requests. This approach limits the additional load to approximately 5% while substantially shortening the tail latency.
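
A minimal Scala sketch of that idea on top of any async query function (this is not an existing solrs API, just an illustration of the technique):

import java.util.concurrent.{Executors, TimeUnit}
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.concurrent.duration._

object Hedged {
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  // Fire the primary request; if it hasn't completed within the given
  // 95th-percentile latency, fire a backup and take whichever finishes first.
  def query[T](p95: FiniteDuration)(send: () => Future[T])(implicit ec: ExecutionContext): Future[T] = {
    val primary = send()
    val backup  = Promise[T]()
    scheduler.schedule(new Runnable {
      def run(): Unit = if (!primary.isCompleted) backup.completeWith(send())
    }, p95.toMillis, TimeUnit.MILLISECONDS)
    // A real implementation would also cancel the backup once the primary wins.
    Future.firstCompletedOf(Seq(primary, backup.future))
  }
}

// Usage sketch, assuming `solr` and `q` exist: Hedged.query(200.millis)(() => solr.query(q))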

Serializable

Martin,
can you make the library Serializable?
I am trying to use it in a Spark application but cannot, because the client is not serializable.
Thanks
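
A common workaround in Spark (a sketch with assumed names): build the client on the executors, inside the partition closure, instead of serializing an instance from the driver.

import org.apache.spark.rdd.RDD
import io.ino.solrs.AsyncSolrClient
import io.ino.solrs.future.ScalaFutureFactory.Implicit

// One client per partition/executor; nothing client-related gets serialized.
def ingest(rdd: RDD[String]): Unit =
  rdd.foreachPartition { docs =>
    val solr = AsyncSolrClient.Builder("http://solr-host:8983/solr/collection1").build
    // In a real job you'd wait for the pending futures before shutting down.
    try docs.foreach { doc => /* index or query via solr here */ }
    finally solr.shutdown()
  }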

"SolrServerException: No solr server available." in version 1.3.x

Hello, I'm using SolrCloud with ZooKeeper and I'm initializing an AsyncSolrClient like this:

val serversSolr = new CloudSolrServers(
    zkHost = play.Play.application.configuration.getString("solr.zk"),
    defaultCollection = Some(play.Play.application.configuration.getString("solr.collection")))

val solr = AsyncSolrClient.Builder(RoundRobinLB(serversSolr)).build

The first time I execute a call like this:

...
solr.query(myquery)
...

I get the following exception:

org.apache.solr.client.solrj.SolrServerException: No solr server available.
    at io.ino.solrs.AsyncSolrClient.io$ino$solrs$AsyncSolrClient$$loadBalanceQuery(AsyncSolrClient.scala:199) ~[solrs_2.11-1.3.1.jar:1.3.1]
    at io.ino.solrs.AsyncSolrClient.query(AsyncSolrClient.scala:178) ~[solrs_2.11-1.3.1.jar:1.3.1]

If I execute the same query again, everything works fine. Apparently the problem doesn't happen if, right after creating the AsyncSolrClient, I call this:

serversSolr.setAsyncSolrClient(solr)

In version 1.2.0 and previous versions everything works fine.
Thanks

Allow multiple instances of FastestServerLB globally

Currently, in solrs 3, building a FastestServerLB instance calls initJmx().

Calling initJmx() twice causes an exception, because it tries to register the same MBean twice, so you can't create two FastestServerLB instances in the same JVM.

Support for Solr 8.8.2

Can we test whether it supports the latest Solr 8.8.2 and 8.9 versions, since some security vulnerabilities were found in 8.6?

Query not working with SolrCloud in standalone mode

When running SolrCloud in standalone mode

$ bin/solr start -cloud -e cloud

I get the following error as the result of every query:

WARN solrs.AsyncSolrClient: Query failed for server SolrServer(http://127.0.1.1:7574/solr/gettingstarted_shard1_replica2/, Enabled), not retrying. Exception was: io.ino.solrs.RemoteSolrException: Expected mime type [] but got [text/html].
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/gettingstarted_shard1_replica2//select. Reason:
<pre>    Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>

</body>
</html>

io.ino.solrs.RemoteSolrException: Expected mime type [] but got [text/html].
...

As you can see in the query URL there is a double '/' before 'select': /solr/gettingstarted_shard1_replica2//select .
In embedded mode Solr runs (by default) on Jetty, and the double slash in the URL confuses it.
In contrast, if you deploy the Solr webapp on Tomcat, the error disappears.
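
The client-side fix is essentially URL normalization when the handler path is appended; a small illustration (not the actual solrs code):

// When the configured base URL already ends with '/', stripping it before
// appending the handler path avoids the '//select' form that confuses Jetty.
val baseUrl   = "http://127.0.1.1:7574/solr/gettingstarted_shard1_replica2/"
val selectUrl = baseUrl.stripSuffix("/") + "/select"
// => http://127.0.1.1:7574/solr/gettingstarted_shard1_replica2/select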

Provide Java-friendly API

From issue #8 / comment by @fsantagostinobietti:

I found it quite easy to use solrs from Java.
The Scala Future is OK (right now I'm working with Java 7), but with Java 8, CompletableFuture will probably be the best option.
I had just one problem with default values when instantiating the CloudSolrServers class. My code was something like this (a bit ugly):

Duration zkClientTimeout = CloudSolrServers.$lessinit$greater$default$2();    // default value 2nd parameter
Duration zkConnectTimeout = CloudSolrServers.$lessinit$greater$default$3();   // default value 3rd parameter
Duration clusterStateUpdateInterval = CloudSolrServers.$lessinit$greater$default$4(); // default value 4th parameter
Option<String> defaultCollection = Option.apply(null); // None value
Option<WarmupQueries> warmupQueries = Option.apply(null); // None value

CloudSolrServers servers = new CloudSolrServers( 
          zkQuorum, 
          zkClientTimeout, 
          zkConnectTimeout, 
          clusterStateUpdateInterval, 
          defaultCollection, 
          warmupQueries  );

Java-friendly constructors could be enough for me.

Release plan for the 2.0 version? Stability of the current Maven (RC4) artifact?

(I am sorry to be writing a question to the issue tracker --- I couldn't find a user board or user mailing list. Feel free to delete this issue and point me to the proper place for asking release-related questions.)

This is a wonderful library that fills our need. I've been testing solrs to replace SolrJ synchronous queries on our backend servers, and it seems to work well.

Here are my questions.

  • The documentation says "use RC3", but RC3 has some limitations like missing methods (e.g. deletions) and missing capabilities (e.g. querying via POST) compared to the source code of the master branch.
  • On Maven Central there is an RC4 artifact, which seems to be in line with the current master branch. I guess I should use RC4.
  • We are about to use RC4 in a production environment --- are there any pitfalls I should be careful about or worry about? :-)

Thanks again for this great library.

Gil
OMQ GmbH

Shard Support

Please provide shard support, as it is a very important Solr feature for performance when dealing with huge amounts of data.

Migrate from Travis CI to GitHub Actions

Since Travis discontinued their travis-ci.org offering (.com requires OSS maintainers to ask for credits via email), the build should be migrated to GitHub Actions.

https://docs.github.com/en/actions/migrating-to-github-actions/migrating-from-travis-ci-to-github-actions

Note: publishing GitHub Pages already seems to work without Travis via sbt ghpagesPushSite: https://github.com/inoio/solrs/deployments/activity_log?environment=github-pages ("Deployed" linking to https://github.com/inoio/solrs/runs/4674248970?check_suite_focus=true)

java.lang.NoSuchMethodError: org.apache.solr.client.solrj.request.UpdateRequest.setCommitWithin

I get the following error trying to use solrs_2.11 version 2.1.0.

java.lang.NoSuchMethodError: org.apache.solr.client.solrj.request.UpdateRequest.setCommitWithin(I)V
at io.ino.solrs.AsyncSolrClient.updateRequest(AsyncSolrClient.scala:242)
at io.ino.solrs.AsyncSolrClient.addDocs(AsyncSolrClient.scala:301)
at content.util.solr.SolrIngestor$$anonfun$ingest$1$$anonfun$apply$1.apply(SolrIngestor.scala:29)
at content.util.solr.SolrIngestor$$anonfun$ingest$1$$anonfun$apply$1.apply(SolrIngestor.scala:19)
at content.util.timer$.apply(timer.scala:6)
at content.util.solr.SolrIngestor$$anonfun$ingest$1.apply(SolrIngestor.scala:19)
at content.util.solr.SolrIngestor$$anonfun$ingest$1.apply(SolrIngestor.scala:18)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at content.util.solr.SolrIngestor$.ingest(SolrIngestor.scala:18)
at content.util.SparkRunner$$anonfun$processPartition$1.apply$mcV$sp(SparkRunner.scala:37)
at content.util.SparkRunner$$anonfun$processPartition$1.apply(SparkRunner.scala:33)
at content.util.SparkRunner$$anonfun$processPartition$1.apply(SparkRunner.scala:33)
at content.util.timer$.apply(timer.scala:6)
at content.util.SparkRunner.processPartition(SparkRunner.scala:33)
at content.parquet.ContentIngestionJob$$anonfun$reingest$1$$anonfun$apply$mcV$sp$1.apply(ContentIngestionJob.scala:17)
at content.parquet.ContentIngestionJob$$anonfun$reingest$1$$anonfun$apply$mcV$sp$1.apply(ContentIngestionJob.scala:16)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
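
A NoSuchMethodError like this usually means a different solr-solrj version ended up on the runtime classpath than the one solrs was compiled against (in a Spark job the runtime classpath often carries other versions of shared libraries). A hedged build.sbt sketch of pinning it explicitly; the concrete version is an assumption and should match what your solrs release expects:

// build.sbt sketch; "7.4.0" is an assumed placeholder, not a verified match for solrs 2.1.0
dependencyOverrides += "org.apache.solr" % "solr-solrj" % "7.4.0"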

Using query.process(client, collection) in Java

Hey
I'd like to use

final DirectJsonQueryRequest query = new DirectJsonQueryRequest("""
{
  "query" : "name:iPod"
}
""");
final QueryResponse response = query.process(solrClient, COLLECTION_NAME);

I wonder how to do this in solrs. I've found the execute method and other methods that seem relevant, but they require a SolrResponseFactory. I haven't found any Java way of passing a query/request object as a parameter to a method equivalent to process. Is there such a method already?

Query method POST, double sending all params

While testing solrs to replace SolrJ, I've found a bug.
When the query method is set to POST, all parameters passed to Solr (q, fq, ... or any manually set parameters) are duplicated. Here is one such log output from the Solr server.

2017-05-30 07:19:25.544 INFO (qtp472654579-12) [ x:question] o.a.s.c.S.Request [question] webapp=/solr path=/select params={q=frage&q=frage&fq=answers.isEmpty:false&fq=answers.expirationDate:[NOW+TO+*]&fq=answers.startDate:[*+TO+NOW]&fq=answers.visibility:(PUBLIC)&fq=language:DE&fq=tenant.id:1&fq=answers.isEmpty:false&fq=answers.expirationDate:[NOW+TO+*]&fq=answers.startDate:[*+TO+NOW]&fq=answers.visibility:(PUBLIC)&fq=language:DE&fq=tenant.id:1&wt=javabin&wt=javabin&version=2&version=2} hits=2 status=0 QTime=8

Naturally, with SolrJ, using POST does not cause this kind of parameter duplication.
I have not traced the code to check why this occurs, but the first thing that comes to mind is that the arguments may be passed twice -- once in the URL (just like in the GET case), and once more in the POST body ....

Note that duplications are normally not very visible, unless ...

  • The query is repeated twice, which can change the score.
    -- On a server with a limited HTTP header size, this can produce different rankings (e.g. the duplicated part of the query gets more weight, while the truncated part is present only in the POST body).
  • Special params such as a MoreLikeThis query's body.stream --- a server exception occurs if a duplicated param is detected.

DefaultAsyncHttpClient throwing java.lang.NumberFormatException error with Play framework

I have a play scala app in which I am trying to use solrs.

  1. Scala version - "2.12.3"
  2. Play WS AHC version - "com.typesafe.play" %% "play-ahc-ws" % "2.6.9"
  3. Solrs version - "io.ino" %% "solrs" % "2.1.0"

Build.sbt dependencies

  "com.google.inject" % "guice" % "4.1.0",
  "com.typesafe.akka" %% "akka-http" % "10.0.5",
  "ch.qos.logback" % "logback-classic" % "1.1.7",
  "com.typesafe.scala-logging" %% "scala-logging" % "3.5.0",
  "org.mongodb" %% "casbah" % "3.1.1",
  "com.github.salat" %% "salat" % "1.11.2",
  "com.typesafe.play" %% "play-ws" % "2.7.0-M1",
  "com.typesafe.play" %% "play-json" % "2.6.8",
  "com.google.guava" % "guava" % "21.0",
  "com.typesafe.play" %% "play-ahc-ws" % "2.7.0-M1",
  "org.spire-math" %% "spire" % "0.13.0",
  "io.ino" %% "solrs" % "2.1.0"

Usage -

  import io.ino.solrs._
  import io.ino.solrs.future.ScalaFutureFactory.Implicit
  val servers = new CloudSolrServers(uri)
  val asyncHttpClient = new DefaultAsyncHttpClient()
  val solrClient = new MoolSolrClient(AsyncSolrClient.Builder(RoundRobinLB(servers))
      .withHttpClient(asyncHttpClient).build)

I have tried the workarounds specified here, but I still get the same error shown below -

playframework/play-ws#87

Caused by: java.lang.NumberFormatException: null
at java.lang.Integer.parseInt(Integer.java:542)
	at java.lang.Integer.parseInt(Integer.java:615)
	at org.asynchttpclient.config.AsyncHttpClientConfigHelper$Config.getInt(AsyncHttpClientConfigHelper.java:109)
	at org.asynchttpclient.config.AsyncHttpClientConfigDefaults.defaultHttpClientCodecInitialBufferSize(AsyncHttpClientConfigDefaults.java:240)
	at org.asynchttpclient.DefaultAsyncHttpClientConfig$Builder.<init>(DefaultAsyncHttpClientConfig.java:712)
	at org.asynchttpclient.DefaultAsyncHttpClient.<init>(DefaultAsyncHttpClient.java:69)
	at org.asynchttpclient.Dsl.asyncHttpClient(Dsl.java:28)

Is there a workaround? For example, could solrs, with only simple modifications, support Play's version of DefaultAsyncHttpClient (as Play uses the same library internally)?

Load balancer is querying the wrong collection

If I create a load balancer with a defined collection name for its test queries, it connects to another collection anyway:

Tuple2<String, SolrQuery> col1TestQuery = new Tuple2<>("member", new SolrQuery("assoc_type:lb_test").setRows(0));
Function1<SolrServer, Tuple2<String, SolrQuery>> collectionAndTestQuery = server -> col1TestQuery;

Solr prints the following errors constantly when I use the load balancer:


2024-01-24 16:27:50.702 ERROR (qtp1930903395-395007) [c:chat s:shard1 r:core_node14 x:chat_shard1_replica_n13 t:10.2.0.12-1369324] o.a.s.h.RequestHandlerBase Client exception => org.apache.solr.common.SolrException: undefined field assoc_type
	at org.apache.solr.schema.IndexSchema.getDynamicFieldType(IndexSchema.java:1478)
org.apache.solr.common.SolrException: undefined field assoc_type
	at org.apache.solr.schema.IndexSchema.getDynamicFieldType(IndexSchema.java:1478) ~[?:?]
	at org.apache.solr.schema.IndexSchema$SolrQueryAnalyzer.getWrappedAnalyzer(IndexSchema.java:500) ~[?:?]
	at org.apache.lucene.analysis.DelegatingAnalyzerWrapper$DelegatingReuseStrategy.getReusableComponents(DelegatingAnalyzerWrapper.java:83) ~[?:?]
	at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:184) ~[?:?]
	at org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:256) ~[?:?]
	at org.apache.solr.parser.SolrQueryParserBase.newFieldQuery(SolrQueryParserBase.java:527) ~[?:?]
	at org.apache.solr.parser.QueryParser.newFieldQuery(QueryParser.java:68) ~[?:?]
	at org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:1140) ~[?:?]
	at org.apache.solr.parser.SolrQueryParserBase.handleBareTokenQuery(SolrQueryParserBase.java:856) ~[?:?]
	at org.apache.solr.parser.QueryParser.Term(QueryParser.java:454) ~[?:?]
	at org.apache.solr.parser.QueryParser.Clause(QueryParser.java:293) ~[?:?]
	at org.apache.solr.parser.QueryParser.Query(QueryParser.java:173) ~[?:?]
	at org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:143) ~[?:?]
	at org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:274) ~[?:?]
	at org.apache.solr.search.LuceneQParser.parse(LuceneQParser.java:51) ~[?:?]
	at org.apache.solr.search.QParser.getQuery(QParser.java:188) ~[?:?]
	at org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:172) ~[?:?]
	at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:431) ~[?:?]
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:226) ~[?:?]
	at org.apache.solr.core.SolrCore.execute(SolrCore.java:2901) ~[?:?]
	at org.apache.solr.servlet.HttpSolrCall.executeCoreRequest(HttpSolrCall.java:875) ~[?:?]
	at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:561) ~[?:?]
	at org.apache.solr.servlet.SolrDispatchFilter.dispatch(SolrDispatchFilter.java:262) ~[?:?]
	at org.apache.solr.servlet.SolrDispatchFilter.lambda$doFilter$0(SolrDispatchFilter.java:219) ~[?:?]
	at org.apache.solr.servlet.ServletUtils.traceHttpRequestExecution2(ServletUtils.java:246) ~[?:?]
	at org.apache.solr.servlet.ServletUtils.rateLimitRequest(ServletUtils.java:215) ~[?:?]
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:213) ~[?:?]
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195) ~[?:?]
	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:210) ~[jetty-servlet-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1635) ~[jetty-servlet-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:527) ~[jetty-servlet-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:131) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:598) ~[jetty-security-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:223) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1570) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:221) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1384) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:176) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:484) ~[jetty-servlet-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1543) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:174) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1306) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:129) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:149) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:228) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:141) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:301) ~[jetty-rewrite-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:822) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.Server.handle(Server.java:563) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.HttpChannel$RequestDispatchable.dispatch(HttpChannel.java:1598) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:753) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:501) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:287) ~[jetty-server-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:314) ~[jetty-io-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:100) ~[jetty-io-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.io.SelectableChannelEndPoint$1.run(SelectableChannelEndPoint.java:53) ~[jetty-io-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.runTask(AdaptiveExecutionStrategy.java:421) ~[jetty-util-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.consumeTask(AdaptiveExecutionStrategy.java:390) ~[jetty-util-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.tryProduce(AdaptiveExecutionStrategy.java:277) ~[jetty-util-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.run(AdaptiveExecutionStrategy.java:199) ~[jetty-util-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:411) ~[jetty-util-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:969) ~[jetty-util-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.doRunJob(QueuedThreadPool.java:1194) ~[jetty-util-10.0.17.jar:10.0.17]
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1149) ~[jetty-util-10.0.17.jar:10.0.17]
	at java.lang.Thread.run(Thread.java:1583) [?:?]

RoundRobinLB idx is not thread safe

We're using RoundRobinLB in a highly parallelized application, and we found that our SolrCloud was being hit with unbalanced requests. We found the RoundRobinLB class is using a plain local idx (is it possible to change it to an AtomicInteger here?):

private var idx = 0

=>

private final val idx = new AtomicInteger(0)
val next = idx.incrementAndGet()
Success(preferred(next % preferred.length))
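
A self-contained sketch of the proposed approach (not the actual solrs source):

import java.util.concurrent.atomic.AtomicInteger

class RoundRobin[A](servers: IndexedSeq[A]) {
  private val idx = new AtomicInteger(0)
  // getAndIncrement is atomic, and floorMod keeps the index non-negative
  // even after the counter overflows Int.MaxValue.
  def next(): A = servers(Math.floorMod(idx.getAndIncrement(), servers.length))
}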

CloudSolrClient

How do I get the same feature as the following from CloudSolrClient?

/**
 * Tells {@link Builder} that created clients should send direct updates to shard leaders only.
 *
 * UpdateRequests whose leaders cannot be found will "fail fast" on the client side with a {@link SolrException}
 */
public Builder sendDirectUpdatesToShardLeadersOnly() {
  directUpdatesToLeadersOnly = true;
  return this;
}

/**
 * Tells {@link Builder} that created clients can send updates to any shard replica (shard leaders and non-leaders).
 *
 * Shard leaders are still preferred, but the created clients will fallback to using other replicas if a leader
 * cannot be found.
 */
public Builder sendDirectUpdatesToAnyShardReplica() {
  directUpdatesToLeadersOnly = false;
  return this;
}

scala-xml version conflict

This was raised via gitter, https://gitter.im/inoio/solrs?at=61d5ad25526fb77b3164391f:

I am using solrs with some frameworks (Lagom/Akka/Play), but after the newest PR I have a binary compatibility conflict because of scala-xml (the frameworks use 1.x.x, which is not binary compatible with the new 2.x.x). I know I have little chance of getting the frameworks to update to the newest version of this XML library (I know that Lagom is now mainly in support mode, i.e. security fixes).

My question is whether it is possible to have a solrs release with a downgraded scala-xml.

This issue shall track that compatibility problem. One option to deal with it would be to downgrade/pin scala-xml to the version needed by a framework like Lagom.
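
Until that is resolved, a common sbt-side workaround (an assumption, not an official solrs recommendation) is to tell sbt's eviction checker to treat scala-xml versions as always compatible; note that this only silences the version conflict, it does not make the two versions binary compatible at runtime:

// build.sbt, requires sbt 1.5+
ThisBuild / libraryDependencySchemes += "org.scala-lang.modules" %% "scala-xml" % VersionScheme.Always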
