
xoom-wire's Introduction

xoom-wire


Wire protocol messaging implementations for the VLINGO XOOM platform SDK, including full-duplex TCP, UDP multicast, and RSocket, built on VLINGO XOOM ACTORS.

Docs: https://docs.vlingo.io/xoom-wire

Installation

Maven:

  <dependencies>
    <dependency>
      <groupId>io.vlingo.xoom</groupId>
      <artifactId>xoom-wire</artifactId>
      <version>1.11.1</version>
      <scope>compile</scope>
    </dependency>
  </dependencies>

Gradle:

dependencies {
    compile 'io.vlingo.xoom:xoom-wire:1.11.1'
}

License (See LICENSE file for full license)

Copyright © 2012-2023 VLINGO LABS. All rights reserved.

This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/.

Licenses for Dependencies

SSLSocketChannel support under org.baswell.niossl, licensed under Apache 2. Copyright 2015 Corey Baswell. Corey's suggestion is to copy his source into your project, which we did due to Java version conflicts.

Guava is open source, licensed under Apache 2 by The Guava Authors: https://github.com/google/guava/blob/0d9470c009e7ae3a8f4de8582de832dc8dffb4a4/android/guava/src/com/google/common/base/Utf8.java

xoom-wire's People

Contributors

alanstrait, aleixmorgadas, alexguzun, buritos, d-led, danilo-ambrosio, davemuirhead, dependabot[bot], hamzajg, jakzal, kbastani, olegdokuka, pflueras, timofurrer, vaughnvernon, vlingo-java


xoom-wire's Issues

Too much overhead from copying ByteBuffers

My theory is that all the copying can be eliminated with some rearrangement of the code that performs the release. My gut feeling is that the code is structured in this way (eager to release) because of the limitations of the previous pool implementation (failure to meet demand). Given the scalable nature of the new implementation (we pay the price for it), it is ok to hold on to a buffer instance for longer, meaning that it is ok to release it at the final consumer, as long as the producers do not attempt to reuse it in any way (they should always acquire another buffer from the pool).

@alexguzun confirms (although a bit concerned about the rippling effects of changing the API for ManagedOutboundChannel::write). I believe we can work around that, preserving the semantics of write(ByteBuffer) by overloading write to also accept ConsumerByteBuffer, then implementing write(ByteBuffer) in terms of write(ConsumerByteBuffer), all of which is very local to wire.

Originally posted by @buritos in #16 (comment)
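The overload proposed above can be sketched as follows. PooledBuffer and OutboundChannel are illustrative stand-ins for xoom-wire's ConsumerByteBuffer and ManagedOutboundChannel, not the actual types: write(ByteBuffer) keeps its semantics and copies once at the boundary, while the pool-aware overload transfers ownership so only the final consumer releases.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a pooled buffer that the *final* consumer releases.
class PooledBuffer {
  private final ByteBuffer delegate;
  private boolean released;

  PooledBuffer(int capacity) { this.delegate = ByteBuffer.allocate(capacity); }

  ByteBuffer buffer() { return delegate; }

  void release() { released = true; }      // a real pool would reclaim memory here

  boolean isReleased() { return released; }
}

// Sketch of the proposed API shape: write(ByteBuffer) is implemented in terms
// of the pool-aware write(PooledBuffer), so intermediate producers never copy.
interface OutboundChannel {
  default void write(ByteBuffer raw) {
    PooledBuffer pooled = new PooledBuffer(raw.remaining());
    pooled.buffer().put(raw.duplicate());  // one copy at the API boundary only
    pooled.buffer().flip();
    write(pooled);
  }

  void write(PooledBuffer pooled);         // ownership transfers; consumer releases
}

// Example consumer: reads the bytes, then releases the buffer exactly once.
class RecordingChannel implements OutboundChannel {
  final List<String> written = new ArrayList<>();

  @Override
  public void write(PooledBuffer pooled) {
    byte[] bytes = new byte[pooled.buffer().remaining()];
    pooled.buffer().get(bytes);
    written.add(new String(bytes, StandardCharsets.UTF_8));
    pooled.release();                      // release at the final consumer, as proposed
  }
}
```

This keeps write(ByteBuffer) callers untouched while letting pool-aware callers avoid the copy entirely.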

Memory leak of ConsumerByteBufferPool objects

This issue is easily reproduced with the vlingo-helloworld example.

vlingo-actors.properties configuration:

plugin.name.pooledCompletes = true
plugin.pooledCompletes.classname = io.vlingo.actors.plugin.completes.PooledCompletesPlugin
plugin.pooledCompletes.pool = 50
plugin.pooledCompletes.mailbox = queueMailbox

plugin.name.queueMailbox = true
plugin.queueMailbox.classname = io.vlingo.actors.plugin.mailbox.concurrentqueue.ConcurrentQueueMailboxPlugin
plugin.queueMailbox.defaultMailbox = true
plugin.queueMailbox.numberOfDispatchersFactor = 0
plugin.queueMailbox.numberOfDispatchers = 180
plugin.queueMailbox.dispatcherThrottlingCount = 1

# ... because of vlingo-xoom
plugin.name.slf4jLogger=true
plugin.slf4jLogger.classname=io.vlingo.actors.plugin.logging.slf4j.Slf4jLoggerPlugin
plugin.slf4jLogger.name=vlingo/xoom
plugin.slf4jLogger.defaultLogger=true

A test case with 150 concurrent threads was used. Each thread performs the following requests:

  • POST /greetings
  • PATCH /greetings/${greetingId}/message
  • PATCH /greetings/${greetingId}/description
  • GET /greetings/${greetingId}

An OutOfMemoryError shows up quite quickly with a 1024m heap (-Xmx1024m).
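The load described above can be sketched generically; since the actual endpoints require a running vlingo-helloworld server, the four-request sequence is abstracted behind a Runnable here, and the names are illustrative.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Generic sketch of the repro load: N concurrent workers, each repeating the
// request sequence (POST, two PATCHes, GET in the actual test case).
class ConcurrentLoad {
  static void run(int threads, int iterations, Runnable requestSequence) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    CountDownLatch done = new CountDownLatch(threads);
    for (int t = 0; t < threads; t++) {
      pool.submit(() -> {
        try {
          for (int i = 0; i < iterations; i++) requestSequence.run();
        } finally {
          done.countDown();                 // signal even if a request throws
        }
      });
    }
    done.await();
    pool.shutdown();
  }
}
```

With 150 threads and real HTTP calls plugged in as the Runnable, this is the shape of the load that triggers the leak.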

Eliminate Bare TestUntil Uses

The following tests must be converted to use AccessSafely rather than bare TestUntil:

  • SocketRequestResponseChannelTest
  • TestRequestChannelConsumer (mock)
  • TestRequestChannelConsumerProvider (mock)
  • TestResponseChannelConsumer (mock)
  • InboundStreamTest
  • MockInboundStreamInterest (mock)
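For context, the pattern behind the conversion can be sketched with a minimal stand-in. The real AccessSafely lives in the xoom-actors test kit (afterCompleting/writingWith/readingWith); the point, illustrated below with a hand-rolled equivalent, is that it both signals completion and publishes the written state with a happens-before guarantee, whereas bare TestUntil only signals completion.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal stand-in for AccessSafely: every write counts down a latch, and
// reads block until the expected number of writes has happened, so test
// assertions never race the actor/consumer thread.
class SafeAccess {
  private final CountDownLatch latch;
  private final AtomicInteger value = new AtomicInteger();

  SafeAccess(int expectedWrites) { this.latch = new CountDownLatch(expectedWrites); }

  void write(int delta) {                    // called from the consumer/actor thread
    value.addAndGet(delta);
    latch.countDown();
  }

  int read() throws InterruptedException {   // called from the test thread
    latch.await(5, TimeUnit.SECONDS);        // bounded wait, like afterCompleting
    return value.get();
  }
}
```

The latch's await establishes the happens-before edge, so the value read is guaranteed to include all counted writes, which a bare TestUntil-style signal does not provide for associated state.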

RSocket Timeouts and Connection Errors

Got the following while testing the lattice grid. It happened after stopping a node and bringing it back up. After that error, a grid actor that was supposed to relocate back to the restarted node never made it through.

00:37:15.436 [pool-2-thread-3] WARN  io.vlingo.actors.Logger - Failed to create RSocket outbound channel for Address[Host[localhost],37371,OP], because java.util.concurrent.TimeoutException: Did not observe any item or terminal signal within 100ms in 'flatMap' (and no fallback has been configured)

The following is another error frequently seen in the logs. So far it does not seem to interfere with anything, but it could be that we are misusing RSocket, recovering from the error with a workaround instead of properly sending the expected keep-alive signals.

io.rsocket.exceptions.ConnectionErrorException: No keep-alive acks for 90000 ms
	at io.rsocket.RSocketRequester.terminate(RSocketRequester.java:115) ~[rsocket-core-1.0.0-RC5.jar:na]
	at io.rsocket.keepalive.KeepAliveSupport.tryTimeout(KeepAliveSupport.java:110) ~[rsocket-core-1.0.0-RC5.jar:na]
	at io.rsocket.keepalive.KeepAliveSupport$ClientKeepAliveSupport.onIntervalTick(KeepAliveSupport.java:146) ~[rsocket-core-1.0.0-RC5.jar:na]
	at io.rsocket.keepalive.KeepAliveSupport.lambda$start$0(KeepAliveSupport.java:54) ~[rsocket-core-1.0.0-RC5.jar:na]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.FluxInterval$IntervalRunnable.run(FluxInterval.java:123) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.scheduler.PeriodicWorkerTask.call(PeriodicWorkerTask.java:59) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.scheduler.PeriodicWorkerTask.run(PeriodicWorkerTask.java:73) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_241]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[na:1.8.0_241]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[na:1.8.0_241]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[na:1.8.0_241]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_241]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_241]
	at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_241]
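The failure mode behind this exception can be illustrated with a generic watchdog (this is a sketch of the keep-alive idea, not the rsocket-java implementation): the client records the time of the last KEEPALIVE ack, and a periodic tick declares the connection dead once that age exceeds the configured timeout, producing the "No keep-alive acks for N ms" condition above.

```java
import java.util.concurrent.atomic.AtomicLong;

// Generic keep-alive check: terminate when no ack has arrived within the timeout.
class KeepAliveWatchdog {
  private final long timeoutMillis;
  private final AtomicLong lastAckMillis;

  KeepAliveWatchdog(long timeoutMillis, long nowMillis) {
    this.timeoutMillis = timeoutMillis;
    this.lastAckMillis = new AtomicLong(nowMillis);
  }

  void onAck(long nowMillis) { lastAckMillis.set(nowMillis); }

  // Returns true when the periodic tick should terminate the connection.
  boolean onTick(long nowMillis) {
    return nowMillis - lastAckMillis.get() > timeoutMillis;
  }
}
```

If one side stops sending keep-alive frames (or acks are lost), the peer's watchdog fires even though the transport may still be healthy, which matches the suspicion above that the signals are not being sent properly.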

Flaky test RSocketServerChannelActorTest

[INFO] Running io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActorTest
17:13:23.989 [pool-11-thread-3] INFO  io.vlingo.actors.Logger - RSocket inbound channel OP at port 37372 is closed
17:13:23.998 [pool-11-thread-2] INFO  i.v.w.f.o.r.RSocketOutboundChannel - RSocket outbound channel for Address[Host[localhost],37371,OP] is closed
17:13:24.002 [pool-11-thread-1] INFO  i.v.w.f.o.r.RSocketOutboundChannel - RSocket outbound channel for Address[Host[localhost],37371,OP] is closed
17:13:24.019 [pool-20-thread-2] INFO  io.vlingo.actors.Logger - io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActor - RSocket server channel opened at port 0
17:13:24.104 [main] INFO  io.vlingo.actors.Logger - RSocket client channel opened for address Address[Host[127.0.0.1],0,NONE]
17:13:24.143 [pool-20-thread-1] INFO  io.vlingo.actors.Logger - io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActor - RSocket server channel closed
17:13:24.158 [main] ERROR io.vlingo.actors.Logger - RSocket client channel for address Address[Host[127.0.0.1],0,NONE] received unrecoverable error
java.nio.channels.ClosedChannelException: null
17:13:24.162 [pool-20-thread-1] ERROR io.vlingo.actors.Logger - io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActor - Unexpected error in server channel
java.nio.channels.ClosedChannelException: null
17:13:24.166 [main] INFO  io.vlingo.actors.Logger - RSocket client channel for address Address[Host[127.0.0.1],0,NONE] is closed
17:13:25.259 [pool-20-thread-3] DEBUG io.vlingo.actors.Logger - io.vlingo.actors.PrivateRootActor - io.vlingo.actors.DeadLettersActor - DeadLetter[Actor[type=Slf4jLoggerActor address=Address[id=11, name=(none)]].close()]
17:13:25.269 [pool-23-thread-2] INFO  io.vlingo.actors.Logger - io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActor - RSocket server channel opened at port 0
17:13:25.276 [main] INFO  io.vlingo.actors.Logger - RSocket client channel opened for address Address[Host[127.0.0.1],0,NONE]
17:13:25.320 [pool-23-thread-2] INFO  io.vlingo.actors.Logger - io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActor - RSocket server channel closed
17:13:25.322 [main] ERROR io.vlingo.actors.Logger - RSocket client channel for address Address[Host[127.0.0.1],0,NONE] received unrecoverable error
java.nio.channels.ClosedChannelException: null
17:13:25.325 [pool-23-thread-2] ERROR io.vlingo.actors.Logger - io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActor - Unexpected error in server channel
java.nio.channels.ClosedChannelException: null
17:13:25.325 [main] INFO  io.vlingo.actors.Logger - RSocket client channel for address Address[Host[127.0.0.1],0,NONE] is closed
17:13:26.417 [pool-23-thread-3] DEBUG io.vlingo.actors.Logger - io.vlingo.actors.PrivateRootActor - io.vlingo.actors.DeadLettersActor - DeadLetter[Actor[type=Slf4jLoggerActor address=Address[id=11, name=(none)]].close()]
17:13:26.429 [pool-25-thread-1] INFO  io.vlingo.actors.Logger - io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActor - RSocket server channel opened at port 0
17:13:26.433 [main] INFO  io.vlingo.actors.Logger - RSocket client channel opened for address Address[Host[127.0.0.1],0,NONE]
17:13:26.737 [main] ERROR io.vlingo.actors.Logger - RSocket client channel for address Address[Host[127.0.0.1],0,NONE] received unrecoverable error
java.nio.channels.ClosedChannelException: null
17:13:26.737 [main] INFO  io.vlingo.actors.Logger - RSocket client channel for address Address[Host[127.0.0.1],0,NONE] is closed
17:13:26.738 [pool-25-thread-1] ERROR io.vlingo.actors.Logger - io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActor - Unexpected error in server channel
java.nio.channels.ClosedChannelException: null
17:13:26.738 [pool-25-thread-1] INFO  io.vlingo.actors.Logger - io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActor - RSocket server channel closed
17:13:27.829 [pool-25-thread-2] DEBUG io.vlingo.actors.Logger - io.vlingo.actors.PrivateRootActor - io.vlingo.actors.DeadLettersActor - DeadLetter[Actor[type=Slf4jLoggerActor address=Address[id=11, name=(none)]].close()]
17:13:27.837 [pool-27-thread-1] INFO  io.vlingo.actors.Logger - io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActor - RSocket server channel opened at port 0
17:13:27.841 [main] INFO  io.vlingo.actors.Logger - RSocket client channel opened for address Address[Host[127.0.0.1],0,NONE]
17:13:27.858 [main] ERROR io.vlingo.actors.Logger - Unexpected error reading incoming payload
java.lang.NullPointerException: null
	at io.vlingo.wire.fdx.bidirectional.TestResponseChannelConsumer.consume(TestResponseChannelConsumer.java:48) ~[test-classes/:na]
	at io.vlingo.wire.fdx.bidirectional.rsocket.RSocketClientChannel$ChannelResponseHandler.handle(RSocketClientChannel.java:150) [classes/:na]
	at io.vlingo.wire.fdx.bidirectional.rsocket.RSocketClientChannel$ChannelResponseHandler.access$100(RSocketClientChannel.java:135) [classes/:na]
	at io.vlingo.wire.fdx.bidirectional.rsocket.RSocketClientChannel.lambda$prepareChannel$3(RSocketClientChannel.java:114) [classes/:na]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:89) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:162) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onNext(FluxDoFinally.java:123) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.UnicastProcessor.drainRegular(UnicastProcessor.java:240) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.UnicastProcessor.drain(UnicastProcessor.java:312) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.UnicastProcessor.onNext(UnicastProcessor.java:386) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at io.rsocket.RSocketRequester.handleFrame(RSocketRequester.java:494) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.RSocketRequester.handleIncomingFrames(RSocketRequester.java:441) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:242) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drainRegular(FluxGroupBy.java:554) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drain(FluxGroupBy.java:630) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.onNext(FluxGroupBy.java:670) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.onNext(FluxGroupBy.java:205) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at io.rsocket.internal.UnboundedProcessor.drainRegular(UnboundedProcessor.java:118) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.internal.UnboundedProcessor.drain(UnboundedProcessor.java:188) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.internal.UnboundedProcessor.onNext(UnboundedProcessor.java:276) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:189) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at io.rsocket.internal.UnboundedProcessor.drainRegular(UnboundedProcessor.java:118) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.internal.UnboundedProcessor.drain(UnboundedProcessor.java:188) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.internal.UnboundedProcessor.onNext(UnboundedProcessor.java:276) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.RSocketResponder$3.hookOnNext(RSocketResponder.java:461) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.RSocketResponder$3.hookOnNext(RSocketResponder.java:447) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at reactor.core.publisher.BaseSubscriber.onNext(BaseSubscriber.java:160) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at io.rsocket.internal.RateLimitableRequestPublisher$InnerOperator.onNext(RateLimitableRequestPublisher.java:173) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at reactor.core.publisher.UnicastProcessor.drainRegular(UnicastProcessor.java:240) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.UnicastProcessor.drain(UnicastProcessor.java:312) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.UnicastProcessor.subscribe(UnicastProcessor.java:427) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.Flux.subscribe(Flux.java:8264) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at io.rsocket.internal.RateLimitableRequestPublisher.subscribe(RateLimitableRequestPublisher.java:74) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.RSocketResponder.handleStream(RSocketResponder.java:446) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.RSocketResponder.handleChannel(RSocketResponder.java:502) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.RSocketResponder.handleFrame(RSocketResponder.java:315) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:242) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drainRegular(FluxGroupBy.java:554) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drain(FluxGroupBy.java:630) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.subscribe(FluxGroupBy.java:696) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.Flux.subscribe(Flux.java:8264) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:188) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1712) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.MonoProcessor.onNext(MonoProcessor.java:317) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at io.rsocket.internal.ClientServerInputMultiplexer.lambda$new$1(ClientServerInputMultiplexer.java:116) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.drainLoop(FluxGroupBy.java:380) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.drain(FluxGroupBy.java:316) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.onNext(FluxGroupBy.java:201) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at io.rsocket.internal.UnboundedProcessor.drainRegular(UnboundedProcessor.java:118) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.internal.UnboundedProcessor.drain(UnboundedProcessor.java:188) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.internal.UnboundedProcessor.onNext(UnboundedProcessor.java:276) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:189) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at io.rsocket.internal.UnboundedProcessor.drainRegular(UnboundedProcessor.java:118) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.internal.UnboundedProcessor.drain(UnboundedProcessor.java:188) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.internal.UnboundedProcessor.onNext(UnboundedProcessor.java:276) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.RSocketRequester$3$1.hookOnNext(RSocketRequester.java:358) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at io.rsocket.RSocketRequester$3$1.hookOnNext(RSocketRequester.java:333) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at reactor.core.publisher.BaseSubscriber.onNext(BaseSubscriber.java:160) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at io.rsocket.internal.RateLimitableRequestPublisher$InnerOperator.onNext(RateLimitableRequestPublisher.java:173) ~[rsocket-core-1.0.0-RC7-SNAPSHOT.jar:na]
	at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:426) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at reactor.core.publisher.EmitterProcessor.onNext(EmitterProcessor.java:268) ~[reactor-core-3.3.4.RELEASE.jar:3.3.4.RELEASE]
	at io.vlingo.wire.fdx.bidirectional.rsocket.RSocketClientChannel.requestWith(RSocketClientChannel.java:76) [classes/:na]
	at io.vlingo.wire.fdx.bidirectional.rsocket.RSocketServerChannelActorTest.request(RSocketServerChannelActorTest.java:63) ~[test-classes/:na]
	at io.vlingo.wire.fdx.bidirectional.BaseServerChannelTest.testBasicRequestResponse(BaseServerChannelTest.java:45) ~[test-classes/:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_151]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_151]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_151]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_151]
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) ~[junit-4.11.jar:na]
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) ~[junit-4.11.jar:na]
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) ~[junit-4.11.jar:na]
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) ~[junit-4.11.jar:na]
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) ~[junit-4.11.jar:na]
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) ~[junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) ~[junit-4.11.jar:na]
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) ~[junit-4.11.jar:na]
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) ~[junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) ~[junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) ~[junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) ~[junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) ~[junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) ~[junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309) ~[junit-4.11.jar:na]
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) ~[surefire-junit4-2.21.0.jar:2.21.0]
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) ~[surefire-junit4-2.21.0.jar:2.21.0]
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) ~[surefire-junit4-2.21.0.jar:2.21.0]
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) ~[surefire-junit4-2.21.0.jar:2.21.0]
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) ~[surefire-booter-2.21.0.jar:2.21.0]
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) ~[surefire-booter-2.21.0.jar:2.21.0]
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) ~[surefire-booter-2.21.0.jar:2.21.0]
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) ~[surefire-booter-2.21.0.jar:2.21.0]
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.

SocketChannel looping infinitely while handling probe interval

The SocketChannelSelectionProcessorActor loops infinitely while trying to perform some REST operations on the calculator app, a recently implemented vlingo-xoom example running on Heroku.

The following thread dump was captured with jstack on Heroku when the issue occurred.

"pool-3-thread-11" #29 prio=5 os_prio=0 tid=0x00007f1e68ac3000 nid=0x6a runnable [0x00007f1e381de000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
	at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
	at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
	- locked <0x00000000fe8ac860> (a sun.nio.ch.Util$3)
	- locked <0x00000000fe8ac850> (a java.util.Collections$UnmodifiableSet)
	- locked <0x00000000fe8ac870> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
	at io.vlingo.wire.channel.SocketChannelSelectionProcessorActor.probeChannel(SocketChannelSelectionProcessorActor.java:156)
	at io.vlingo.wire.channel.SocketChannelSelectionProcessorActor.intervalSignal(SocketChannelSelectionProcessorActor.java:105)
	at io.vlingo.common.Scheduled__Proxy.lambda$intervalSignal$0(Scheduled__Proxy.java:31)
	at io.vlingo.common.Scheduled__Proxy$$Lambda$373/1199491717.accept(Unknown Source)
	at io.vlingo.actors.LocalMessage.internalDeliver(LocalMessage.java:115)
	at io.vlingo.actors.LocalMessage.deliver(LocalMessage.java:47)
	at io.vlingo.actors.plugin.mailbox.concurrentqueue.ConcurrentQueueMailbox.run(ConcurrentQueueMailbox.java:101)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

The app uses Java 8 and runs on Heroku's Ubuntu-based stack.
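For reference, a defensive probe shape that avoids the classic NIO busy-spin causes can be sketched as follows (this is illustrative, not the SocketChannelSelectionProcessorActor code). Forgetting to remove selected keys, or re-selecting in a tight loop on zero-ready wakeups, are the usual ways a selector-driven probe ends up spinning in epollWait as in the thread dump above.

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

// Hypothetical guarded probe: non-blocking selectNow(), bounded work per
// interval, and selected keys always removed before handling.
class GuardedProbe {
  private final Selector selector;

  GuardedProbe(Selector selector) { this.selector = selector; }

  // Returns the number of keys actually handled in this probe interval.
  int probe(int maxPerInterval) throws IOException {
    int handled = 0;
    int ready = selector.selectNow();        // never block the actor's thread
    if (ready == 0) return 0;                // nothing ready: return to the scheduler, do not spin
    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
    while (keys.hasNext() && handled < maxPerInterval) {
      SelectionKey key = keys.next();
      keys.remove();                         // mandatory, or the key stays "selected" forever
      if (key.isValid()) {
        handled++;                           // real code would read/write the channel here
      }
    }
    return handled;
  }
}
```

Bounding work per interval and always returning to the scheduler keeps the probe cooperative with the actor mailbox instead of monopolizing a dispatcher thread.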
