scala-influxdb-client's People

Contributors

andrioni, argyakrivos, da-liii, kailuowang, michaelzg, paulgoldbaum, sbalea

scala-influxdb-client's Issues

Support for async-http-client 2.0.0/Netty 4

In order to use scala-influxdb-client on a project I'm working on, I had to bump async-http-client to the new 2.0.0-RC15 (which uses Netty 4 and has some minor API changes). Would you be interested in a PR with these changes?

Deriving types for QueryResult Records (and the Option of)

As the query results are Any, how do you suggest deriving types for the values?

For example:

val temperature: Double = record("temperature").asInstanceOf[BigDecimal].toDouble

but that seems nasty. Do you retain the type from the JSON wire format? Any fancy implicit encoders available?

Similarly, how do you advise handling values that may not be there, e.g.

val maybeTemperature: Option[Double] = record("maybeThere") ...

Would be nice, but as it stands I can get NoSuchElementException when I request something that isn't there.

I had a poke around but couldn't find anything obvious.

Thanks in advance
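Until typed decoders exist, one way to avoid the NoSuchElementException is an Option-based lookup. A minimal sketch, assuming a record behaves like a plain Map[String, Any] (a stand-in for the library's record type, not its actual API):

```scala
// Stand-in for a query record: values arrive untyped from the JSON wire format.
val record: Map[String, Any] = Map("temperature" -> BigDecimal(21.5))

// Required value: cast explicitly, as in the snippet above.
val temperature: Double = record("temperature").asInstanceOf[BigDecimal].toDouble

// Optional value: go through get/collect so a missing key (or a value of the
// wrong type) yields None instead of throwing.
val maybeTemperature: Option[Double] =
  record.get("maybeThere").collect { case d: BigDecimal => d.toDouble }
```

The collect partial function doubles as a type check, so a non-numeric value also maps to None rather than a ClassCastException.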

Is there a way to use chunking from the library at query level?

  1. I want to set chunking for the responses using the InfluxDB client. Is there any way I can do that with the library?
  2. I see from the Influx docs that chunking can be enabled at the database level. I want chunking per request, i.e. the user's call should decide whether the query returns a chunked response or not. Can I do that with the library as of now?
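For reference, the raw InfluxDB HTTP API does accept chunking parameters per request; whether the library exposes them is the open question here. A curl sketch against a local instance ('mydb' and the query are placeholders):

```shell
# Ask InfluxDB to stream this one response in chunks of 10000 points
# (per-request, independent of any database-level setting).
curl -G 'http://localhost:8086/query' \
  --data-urlencode 'db=mydb' \
  --data-urlencode 'chunked=true' \
  --data-urlencode 'chunk_size=10000' \
  --data-urlencode 'q=SELECT * FROM measurement'
```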

Unable to submit data to InfluxDB

Error:
Exception in thread "main" java.lang.NoSuchMethodError: org.jboss.netty.handler.codec.http.HttpRequest.headers()Lorg/jboss/netty/handler/codec/http/HttpHeaders;
at com.ning.http.client.providers.netty.request.NettyRequestFactory.newNettyRequest(NettyRequestFactory.java:178)
at com.ning.http.client.providers.netty.request.NettyRequestSender.newNettyRequestAndResponseFuture(NettyRequestSender.java:181)
at com.ning.http.client.providers.netty.request.NettyRequestSender.sendRequestWithCertainForceConnect(NettyRequestSender.java:135)
at com.ning.http.client.providers.netty.request.NettyRequestSender.sendRequest(NettyRequestSender.java:117)
at com.ning.http.client.providers.netty.NettyAsyncHttpProvider.execute(NettyAsyncHttpProvider.java:87)
at com.ning.http.client.AsyncHttpClient.executeRequest(AsyncHttpClient.java:506)
at com.ning.http.client.AsyncHttpClient$BoundRequestBuilder.execute(AsyncHttpClient.java:225)
at com.paulgoldbaum.influxdbclient.HttpClient.makeRequest(HttpClient.scala:49)
at com.paulgoldbaum.influxdbclient.HttpClient.post(HttpClient.scala:41)
at com.paulgoldbaum.influxdbclient.Database.executeWrite(Database.scala:34)
at com.paulgoldbaum.influxdbclient.Database.bulkWrite(Database.scala:28)
at com.ebay.gps.monitoring.DataPoster.postToInflux(DataPoster.scala:28)
at com.ebay.gps.monitoring.DataPoster.postData(DataPoster.scala:22)
at com.ebay.gps.monitoring.PostDataRunner$.postData(PostDataRunner.scala:22)
at com.ebay.gps.monitoring.PostDataRunner$.main(PostDataRunner.scala:31)
at com.ebay.gps.monitoring.PostDataRunner.main(PostDataRunner.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

<scala.version>2.10.4</scala.version>
<scala.tools.version>2.10</scala.tools.version>

<dependency>
  <groupId>com.paulgoldbaum</groupId>
  <artifactId>scala-influxdb-client_2.10</artifactId>
  <version>0.4.3</version>
</dependency>

Works fine on Eclipse but fails when run from Command Line.

My FAT JAR contains

$ jar -tvf ingestion-service/target/ep-job.jar
0 Mon Mar 07 16:46:38 PST 2016 META-INF/
104 Mon Mar 07 16:46:36 PST 2016 META-INF/MANIFEST.MF
0 Mon Mar 07 16:46:38 PST 2016 lib/
3461506 Wed Jun 17 15:48:18 PDT 2015 lib/scoobi_2.10-0.9.2.jar
14445780 Wed Jun 17 15:34:44 PDT 2015 lib/scala-compiler-2.10.4.jar
7126372 Wed Jun 17 15:34:10 PDT 2015 lib/scala-library-2.10.4.jar
3206180 Thu Jan 21 10:52:48 PST 2016 lib/scala-reflect-2.10.6.jar
303139 Wed Jun 17 15:35:18 PDT 2015 lib/avro-1.7.4.jar
29555 Wed Jun 17 15:34:40 PDT 2015 lib/paranamer-2.3.jar
538830 Wed Jun 17 15:48:14 PDT 2015 lib/xstream-1.4.8.jar
7188 Wed Jun 17 15:48:12 PDT 2015 lib/xmlpull-1.1.3.1.jar
24956 Wed Jun 17 15:48:12 PDT 2015 lib/xpp3_min-1.1.4c.jar
644148 Wed Jun 17 15:48:14 PDT 2015 lib/javassist-3.12.1.GA.jar
809342 Wed Jun 17 15:48:16 PDT 2015 lib/kiama_2.10-1.6.0.jar
1648200 Wed Jun 17 15:48:22 PDT 2015 lib/guava-11.0.2.jar
19353 Wed Jun 17 15:48:14 PDT 2015 lib/dsinfo_2.10-0.3.0.jar
74708 Wed Jun 17 15:48:14 PDT 2015 lib/dsprofile_2.10-0.3.0.jar
530654 Sun Mar 06 20:46:46 PST 2016 lib/scallop_2.10-0.9.5.jar
208781 Wed Jun 17 15:48:14 PDT 2015 lib/jline-2.11.jar
2047799 Wed Jun 17 15:48:16 PDT 2015 lib/shapeless_2.10.4-2.0.0.jar
11965 Wed Jun 17 15:48:14 PDT 2015 lib/scoobi-compatibility-hadoop2_2.10-1.0.3.jar
433368 Wed Jun 17 15:48:16 PDT 2015 lib/httpclient-4.2.5.jar
227275 Wed Jun 17 15:48:16 PDT 2015 lib/httpcore-4.2.4.jar
1330395 Fri Feb 12 16:58:02 PST 2016 lib/netty-3.10.4.Final.jar
166584 Wed Jun 17 15:48:22 PDT 2015 lib/avro-mapred-1.7.4-hadoop2.jar
187840 Wed Jun 17 15:48:22 PDT 2015 lib/avro-ipc-1.7.4.jar
449505 Wed Jun 17 15:34:42 PDT 2015 lib/velocity-1.7.jar
134133 Wed Jun 17 15:34:42 PDT 2015 lib/servlet-api-2.5-20081211.jar
266688 Wed Jun 17 15:48:22 PDT 2015 lib/avro-ipc-1.7.4-tests.jar
217053 Wed Jun 17 15:48:16 PDT 2015 lib/libthrift-0.9.1.jar
315805 Wed Jun 17 15:36:46 PDT 2015 lib/commons-lang3-3.1.jar
9398321 Wed Jun 17 15:46:56 PDT 2015 lib/scalaz-core_2.10-7.1.0.jar
716873 Wed Jun 17 15:48:18 PDT 2015 lib/scalaz-iteratee_2.10-7.1.0.jar
428107 Wed Jun 17 15:46:44 PDT 2015 lib/scalaz-effect_2.10-7.1.0.jar
349162 Wed Jun 17 15:46:42 PDT 2015 lib/scalaz-concurrent_2.10-7.1.0.jar
588962 Wed Jun 17 15:48:18 PDT 2015 lib/scalaz-scalacheck-binding_2.10-7.1.0.jar
376007 Wed Jun 17 15:48:18 PDT 2015 lib/scalaz-typelevel_2.10-7.1.0.jar
715291 Wed Jun 17 15:48:18 PDT 2015 lib/scalaz-xml_2.10-7.1.0.jar
872327 Wed Sep 02 00:31:30 PDT 2015 lib/scalacheck_2.10-1.11.4.jar
14755 Wed Jun 17 15:46:44 PDT 2015 lib/test-interface-1.0.jar
3478319 Sun Feb 28 17:00:08 PST 2016 lib/platform-scoobi_2.10-1.0-20160226.163738-103.jar
7350967 Sun Feb 28 17:00:18 PST 2016 lib/platform_2.10-1.0-20160226.163139-110.jar
121394 Sun Feb 28 17:00:18 PST 2016 lib/platform-macros_2.10-1.0-20160226.162947-109.jar
720704 Wed Jun 17 15:34:28 PDT 2015 lib/quasiquotes_2.10-2.0.1.jar
487652 Sun Feb 28 16:58:46 PST 2016 lib/sojourner-common-0.2.0-hadoop-2.4.1-EBAY-2.jar
194480 Mon Aug 31 16:27:42 PDT 2015 lib/nscala-time_2.10-2.2.0.jar
621992 Sun Feb 28 16:58:42 PST 2016 lib/joda-time-2.8.2.jar
38460 Wed Jun 17 15:48:24 PDT 2015 lib/joda-convert-1.2.jar
146396 Sun Mar 06 19:12:52 PST 2016 lib/scala-influxdb-client_2.10-0.4.3.jar
740350 Thu Mar 03 14:59:36 PST 2016 lib/async-http-client-1.9.31.jar
289719 Sun Mar 06 19:12:54 PST 2016 lib/spray-json_2.10-1.3.2.jar
25593 Mon Mar 07 16:46:34 PST 2016 lib/ingestion-service-0.0.1-SNAPSHOT.jar

Am I missing something?

AsyncHttpClient conflicts with other Netty based libraries

First of all, thanks for this library, it is very useful.

I've tried to use scala-influxdb-client while exposing services with Finagle (an RPC system from Twitter), which also uses Netty but a different version, and a dependency version conflict arises at run time.

The error is:

java.lang.NoSuchMethodError: io.netty.util.internal.PlatformDependent.newAtomicIntegerFieldUpdater(Ljava/lang/Class;Ljava/lang/String;)Ljava/util/concurrent/atomic/AtomicIntegerFieldUpdater;

To solve this error you can include async-http-client as an explicit dependency, excluding netty, with something like:

libraryDependencies += "com.paulgoldbaum" %% "scala-influxdb-client" % "0.5.2"

libraryDependencies += "org.asynchttpclient" % "async-http-client" % "2.0.32" exclude("org.jboss.netty","netty") exclude("io.netty","netty")

Please pay attention to the versions you use when including async-http-client explicitly; obviously these will change over time.

Possible further solutions to this issue may include:

Upgrade InfluxDB support

I've tried the client with the latest release (0.11) of InfluxDB, but it seems that some API calls do not work any more. For example, the database.exists() always returns true for some reason. On the other hand, inserting and querying datapoints seems to work properly.

Can you please upgrade the supported version of InfluxDB to the latest 0.11 release? Also, providing a lookup table of which client version works with which version of InfluxDB would be great.

Unable to use bulk write

Dependency: com.paulgoldbaum:scala-influxdb-client_2.11:0.4.3

Number of points: 5169

Error:

Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
at com.paulgoldbaum.influxdbclient.Database.buildWriteParameters(Database.scala:52)
at com.paulgoldbaum.influxdbclient.Database.executeWrite(Database.scala:32)
at com.paulgoldbaum.influxdbclient.Database.bulkWrite(Database.scala:28)

Is library still being supported?

I find this library pretty useful for my projects. But I am a bit confused about its maintenance and also I am interested in its further development.

@paulgoldbaum could you give the community an update, please?

Thanks in advance.

Can not get result of multiple queries

It's possible to send multiple queries to InfluxDB by concatenating them with ;.
But the method com.paulgoldbaum.influxdbclient.InfluxDB.query returns only the first series returned by the API.

The issue seems to be that the method com.paulgoldbaum.influxdbclient.QueryResult.fromJson assumes the results array has only one result (see the source).

I'm not sure how you want to tackle this issue, so I'm not doing a PR.

My idea is to add a method to the class InfluxDB: query(query: Seq[String]): Future[QueryResult] or maybe query(query: Seq[String]): Future[Seq[QueryResult]]

What do you think of it?
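For illustration, the Seq-based signature could run one single-statement query per element and sequence the futures. A self-contained sketch where QueryResult and queryOne are hypothetical stand-ins for the library's types, not its actual API:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Stand-in for the library's QueryResult type.
final case class QueryResult(statement: String)

// Stand-in for a single-statement query against InfluxDB.
def queryOne(q: String): Future[QueryResult] = Future.successful(QueryResult(q))

// Proposed API: one QueryResult per statement, preserving order.
def query(queries: Seq[String]): Future[Seq[QueryResult]] =
  Future.sequence(queries.map(queryOne))

val results = Await.result(query(Seq("SELECT * FROM a", "SELECT * FROM b")), 1.second)
```

Returning Future[Seq[QueryResult]] seems preferable to Future[QueryResult], since each statement in the request produces its own entry in the results array.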

Bulkwrite doesn't persist data

Hello!

I want to persist data in an InfluxDB database. Storing point by point works correctly, but using bulkWrite to store a list of points returns the following error:

com.paulgoldbaum.influxdbclient.UnknownErrorException: Error during write: An error occurred during the request
	at com.paulgoldbaum.influxdbclient.Database.exceptionFromStatusCode(Database.scala:65)
	at com.paulgoldbaum.influxdbclient.Database$$anonfun$executeWrite$1.applyOrElse(Database.scala:36)
	at com.paulgoldbaum.influxdbclient.Database$$anonfun$executeWrite$1.applyOrElse(Database.scala:36)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216)
	at scala.util.Try$.apply(Try.scala:192)
	at scala.util.Failure.recover(Try.scala:216)
	at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
	at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: com.paulgoldbaum.influxdbclient.HttpException: An error occurred during the request
	at com.paulgoldbaum.influxdbclient.HttpClient$ResponseHandler.onThrowable(HttpClient.scala:83)
	at org.asynchttpclient.netty.NettyResponseFuture.abort(NettyResponseFuture.java:278)
	at org.asynchttpclient.netty.channel.NettyConnectListener.onFailure(NettyConnectListener.java:181)
	at org.asynchttpclient.netty.channel.NettyChannelConnector$1.onFailure(NettyChannelConnector.java:108)
	at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:28)
	at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:20)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:485)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
	at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
	at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:237)
	at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:49)
	at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:188)
	at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:174)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:485)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:103)
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:978)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:512)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:423)
	at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:482)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.net.ConnectException: executor not accepting a task
	at org.asynchttpclient.netty.channel.NettyConnectListener.onFailure(NettyConnectListener.java:179)
	... 26 more
Caused by: java.lang.IllegalStateException: executor not accepting a task
	at io.netty.resolver.AddressResolverGroup.getResolver(AddressResolverGroup.java:60)
	at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:200)
	... 18 more

Does anyone know how to solve it?

Thanks in advance!

Enhancement: QueryResult doesn't return tags

Would be great to return the "tags" property as well. Currently I can't support a query with a GROUP BY clause since I can't access the grouped tuples. Would be glad to take this on and submit a PR if you'd like contributors on this repo.

Freezing on exit

After executing val influxdb = InfluxDB.connect("localhost", 8086), the program won't exit after the main function finishes.
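If the hang is caused by non-daemon HTTP worker threads, explicitly shutting the client down on exit may help. A sketch only, assuming the client exposes a close() method that stops the underlying async-http-client:

```scala
// Sketch; requires the scala-influxdb-client dependency on the classpath.
// import com.paulgoldbaum.influxdbclient.InfluxDB
//
// val influxdb = InfluxDB.connect("localhost", 8086)
// try {
//   // ... queries and writes ...
// } finally {
//   influxdb.close() // assumption: releases the HTTP client's event-loop threads
// }
```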

tag should not be empty string

If a tag is assigned an empty string, the client only throws an exception saying the return code was 400 and fails to write the point. I think the error should be more informative.

Below is how the InfluxDB Java client reports an empty tag string:

import java.util.concurrent.TimeUnit

import org.influxdb.InfluxDBFactory
import org.influxdb.dto.Point
import org.junit.Test

class InfluxDBTest {
  @Test
  def testEmptyStringTag = {
    val influxDB = InfluxDBFactory.connect("http://localhost:8086", "admin", "admin")

    val point1 = Point.measurement("hello")
      .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
      .tag("a", "")
      .addField("v", 13f)
      .build()

    influxDB.write("test", "default", point1)
  }
}

java.lang.RuntimeException: {"error":"unable to parse 'hello,a= v=13.0 1479448317053000000': missing tag value"}


	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:266)
	at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:167)
	at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:157)
	at samples.InfluxDBTest.testEmptyStringTag(InfluxDBTest.scala:23)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:42)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:262)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:84)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

Point.key also needs to be escaped

It looks like Point's key is not passed through escapeString before being sent to InfluxDB. This means you get HTTP 400 errors like this:

unable to parse 'some long key (hello),someTag=Hello\ World Some\ Short(value)=78i 1449741968000': invalid field format

Note the absence of \ before the spaces in the key.
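For reference, line-protocol measurement names need commas and spaces backslash-escaped, just like tag keys. A minimal self-contained sketch (escapeMeasurement is a hypothetical helper, not the library's escapeString):

```scala
// Escape the characters the line protocol treats specially in measurement names.
def escapeMeasurement(s: String): String =
  s.replace(",", "\\,").replace(" ", "\\ ")

val escaped = escapeMeasurement("some long key (hello)")
// Spaces are now backslash-escaped, so the parser no longer splits the key.
```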

Improve handling of empty aggregation sets

By default, InfluxDB returns null when an aggregation is performed over a grouping set that has no values, with the ability to supply default values via the fill() keyword. Currently, a MalformedResponseException is thrown in this scenario. It would be advantageous to either have a per-query way to handle this, or at least a more specific exception for this situation, since the feature isn't well documented by the Influx team.
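For reference, the server-side workaround mentioned above looks like this in InfluxQL (measurement and field names are placeholders):

```sql
-- fill(0) substitutes 0 for empty 5-minute buckets instead of returning null
SELECT mean(value) FROM cpu WHERE time > now() - 1h GROUP BY time(5m) fill(0)
```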

Writing problem

Hi,

I'm using your library to communicate with InfluxDB. I ran into a problem when writing data: not all of the data is inserted as expected. Here's the scenario: I have two lists of Points, each with the same size of 742400. I insert them sequentially: first the points from the first list, then the points from the second. For the first list there's no problem, all the points are inserted; but for the second, only 74240 points are inserted. Some points in the lists share the same timestamp, but since they are distinguished by a tag (not a field) the points should not be overwritten. Here is how I build the list of points and write them to InfluxDB:

def map2points(
    m: scala.collection.mutable.Map[String, List[Double]], // matrix of points
    channels: Array[String],
    awsKey: String,
    startime: Long,
    frequency: Double
): List[Point] = {
  m.foldLeft(List[Point]()) { case (acc, (key, values)) =>
    val timed_points: List[Point] = values.zipWithIndex map { case (v, i) =>
      val ts = startime + (math round ((10e6 / frequency) * i)).toLong
      Point("titi", ts)
        .addTag("channel", key)
        .addField("value", v)
    }
    timed_points ++ acc
  }
}
val t4 = System.currentTimeMillis
 assert(points.size == 742400)
 Logger.info("Insertion status =  " + Await.result(database.bulkWrite(points, precision = Precision.MICROSECONDS), 10.seconds))
val t5 = System.currentTimeMillis

What I find weird is that the boolean returned by bulkWrite is true even though not all the points are inserted. Do you have an idea why? Maybe I'm using the library in the wrong way?
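As a workaround while investigating, splitting such a large list into smaller batches makes failures easier to localize. A self-contained sketch where writeBatch is a hypothetical stand-in for database.bulkWrite:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Stand-in for database.bulkWrite; always succeeds in this sketch.
def writeBatch(batch: Seq[Int]): Future[Boolean] = Future.successful(true)

// Split 742400 stand-in points into batches of 5000 and write them in order.
val points  = (1 to 742400).toVector
val batches = points.grouped(5000).toVector

val ok = Await.result(
  Future.traverse(batches)(writeBatch).map(_.forall(identity)),
  10.seconds)
```

With real writes, checking each batch's result individually would reveal whether a particular batch (rather than the whole payload) is being dropped.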

Supporting other TSDBs

I'm working on a new project that requires a TSDB and was happily writing to InfluxDB (v0.11) with this excellent client. Scalability is an important requirement for me though and I've since switched to OpenTSDB now that InfluxDB will no longer include clustering support in their open source releases.

It would be great if this project could support OpenTSDB as well. The HTTP APIs are very similar (at least for writing), and this project's Database abstraction and use of AsyncHttpClient are very well implemented. The only real difference I've gleaned so far is that InfluxDB supports multiple fields, whereas each metric in OpenTSDB is a single named value; however, both support tags (arbitrary key/value pairs).

Would anyone be open for a discussion?

After modifying retention policies, writing data fails

show retention policies on mydb
name     duration  shardGroupDuration  replicaN  default
autogen  0s        24h0m0s             1         true

Writes were OK, but after modifying the retention policy duration to 48h, writing data fails.

show retention policies on mydb
name     duration  shardGroupDuration  replicaN  default
autogen  48h0m0s   24h0m0s             1         true


The InfluxDB log shows:
"POST /write?db=mydb&precision=ns&consistency=any HTTP/1.1" 400 68 "-" "AHC/2.0" 028d0f49-ffaf-11e9-8004-000000000000 131

encoding problem

There are encoding problems when tag values or field values contain CJK characters.
