
storm-dynamic-spout's People

Contributors

crim, daniel-dara, dependabot[bot], igor-sfdc, lumenvfintegration, mmoldavan, ryanguest, snyk-bot, sr, stanlemon, svc-scm


storm-dynamic-spout's Issues

Inconsistent default config values

There is a minor inconsistency: some config values are defaulted in SpoutConfig while others are defaulted in the retry manager. In some cases a value is defaulted in both places, which seems redundant, and the two defaults even differ in some cases.

See RETRY_MANAGER_INITIAL_DELAY_MS
https://github.com/salesforce/storm-dynamic-spout/blob/master/src/main/java/com/salesforce/storm/spout/dynamic/config/SpoutConfig.java#L481
https://github.com/salesforce/storm-dynamic-spout/blob/master/src/main/java/com/salesforce/storm/spout/dynamic/retry/DefaultRetryManager.java#L58

SpoutMonitor should report metrics on failed Virtual spouts

Currently the spout monitor reports the following metrics. We should update this to also report how many virtual spout instances have failed or completed exceptionally:

        // Report to metrics record
        getMetricsRecorder().assignValue(getClass(), "bufferSize", tupleOutputQueue.size());
        getMetricsRecorder().assignValue(getClass(), "running", executor.getActiveCount());
        getMetricsRecorder().assignValue(getClass(), "queued", executor.getQueue().size());
        getMetricsRecorder().assignValue(getClass(), "completed", executor.getCompletedTaskCount());
        getMetricsRecorder().assignValue(getClass(), "poolSize", executor.getPoolSize());

Test / Verify what happens when the zookeeper cluster becomes unavailable

Test Procedure

  • Start a test topology against a kafka cluster
  • Shutdown one or more zookeeper instances backing the cluster
  • Determine how the spout responds.
  • Start the zookeeper instances back up and determine how the spout responds.

Expected Behavior
It's expected that the spout will gracefully degrade, and once the zk instance becomes available again it correctly resumes consuming. If this is NOT what happens, lodge additional issues/work items to correct the behavior.

[SIDELINE] Cleanup PersistenceAdapter.persistSidelineRequestState()

  • The 'id' is already included in the 'request'; this is an artifact from the past and we should remove it outright.
  • The 'type' could be moved into the 'request', which probably makes more sense anyhow.
  • The ConsumerPartition + offsets could be grouped together into a single ConsumerState object. This would move the work of splitting and persisting by partition into the persistence adapter, but should not functionally change anything. It would also clean up the logic in the handler quite a bit, and probably makes more sense in the adapter anyhow, since the current approach is very specific to Zookeeper. A possible cleaned-up signature is sketched after this list.
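
A rough sketch of what the cleaned-up method could look like (hypothetical; it assumes the type is folded into SidelineRequest and the per-partition offsets are passed as a ConsumerState):

    /**
     * Hypothetical cleaned-up signature: the identifier and type live inside the request,
     * and all partition/offset pairs are passed as a single ConsumerState object.
     */
    void persistSidelineRequestState(final SidelineRequest request, final ConsumerState state);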

SidelineSpoutHandler::loadSidelines fails to relaunch the Firehose VirtualSpout

This behavior is shown in test: SidelineSpoutHandlerTest::testLoadSidelines

More specifically, in these code lines the test removes the running VirtualSpouts. It then calls loadSidelines(), which triggers these lines to fire:

SidelineSpoutHandler.java

        // After altering the filter chain is complete, lets NOW start the fire hose
        // This keeps a race condition where the fire hose could start consuming before filter chain
        // steps get added.
        if (!spout.hasVirtualSpout(fireHoseIdentifier)) {
            spout.addVirtualSpout(fireHoseSpout);
        }

What is happening here is that the Firehose VirtualSpout instance already exists, but it's not "running", so loadSidelines() attempts to re-add it to the Coordinator. The coordinator fires it up, but when it calls open() on the VirtualSpout it explodes, complaining that it has already been opened. You end up with exceptions like these:

15:03:31.841 ERROR c.s.s.s.d.c.SpoutRunner [[DynamicSpout:SpoutCoordinator] VirtualSpout Pool 3 on Mock:0 VirtualSpoutPrefix:main]: SpoutRunner for VirtualSpoutPrefix:main threw an exception Cannot call open more than once for VirtualSpoutId:VirtualSpoutPrefix:main
java.lang.IllegalStateException: Cannot call open more than once for VirtualSpoutId:VirtualSpoutPrefix:main
	at com.salesforce.storm.spout.dynamic.VirtualSpout.open(VirtualSpout.java:186) ~[classes/:?]
	at com.salesforce.storm.spout.dynamic.coordinator.SpoutRunner.run(SpoutRunner.java:115) [classes/:?]
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626) [?:1.8.0_144]

I'm unsure whether this is a bug in SidelineSpoutHandler, in that it shouldn't try to re-start VirtualSpouts that are already opened but should instead re-create them, or whether it's a bug in the test case scenario.

Upgrade Storm Dependency to 1.1.1

Upgrade Storm Dependency to 1.1.1.

This dependency is scoped as provided, and Storm 1.1.x and Storm 1.0.x appear to be compatible with each other, so we should be able to continue supporting both versions.

Add consumer id to consumer record

The consumer id is a handy piece of information that a topology component could use to determine which virtual spout a tuple is associated with. Adding the consumer id to the consumer record and passing it into the deserializer's deserialize() method should allow users to extend the deserializer to make use of it. This came out of a conversation with Stan, so he should have more details if necessary.
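
A rough sketch of the idea (hypothetical; the project's existing Deserializer signature may differ, and the consumer id could equally be a dedicated identifier type rather than a String):

    /**
     * Hypothetical extended signature: the id of the (virtual spout) consumer that read
     * the record is passed through so custom deserializers can attach it to the tuple.
     */
    Values deserialize(
        final String consumerId,
        final String topic,
        final int partition,
        final long offset,
        final byte[] key,
        final byte[] value
    );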

Race condition on VirtualSpout close cleaning up consumer state

Summary

On VirtualSpout.close(), when a sideline has been completed, it attempts to clean up consumer state. If you have multiple Spout instances (and therefore multiple VirtualSpout instances for the sideline) you can run into a race condition cleaning up consumer state: when it attempts to delete the parent state node in zookeeper it blindly wipes the parent node, and does not handle the case where the parent node has already been wiped by another instance. A possible idempotent cleanup is sketched after the stack trace.

Stack Trace

2017-12-13 16:44:41.875 topo-id] c.s.s.s.d.c.SpoutRunner [ERROR] SpoutRunner for topo-id:sideline:14064E9096F16FF755BD21C640976428 threw an exception org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /sideline-spout/topo-id/consumers/topo-id:sideline:14064E9096F16FF755BD21C640976428 
java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /sideline-spout/topo-id/consumers/topo-id:sideline:14064E9096F16FF755BD21C640976428
	at com.salesforce.storm.spout.dynamic.persistence.zookeeper.CuratorHelper.deleteNodeIfNoChildren(CuratorHelper.java:184) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.dynamic.persistence.ZookeeperPersistenceAdapter.clearConsumerState(ZookeeperPersistenceAdapter.java:168) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.dynamic.kafka.Consumer.removeConsumerState(Consumer.java:620) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.dynamic.VirtualSpout.close(VirtualSpout.java:260) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.dynamic.coordinator.SpoutRunner.run(SpoutRunner.java:181) [stormjar.jar:?]
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626) [?:1.8.0_102]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_102]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_102]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]
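
One way to make the cleanup idempotent (a minimal sketch, assuming a CuratorFramework handle named curator and the parent node path in parentPath):

        try {
            curator.delete().forPath(parentPath);
        } catch (final KeeperException.NoNodeException e) {
            // Another VirtualSpout instance already removed the node; treat this as success.
        } catch (final Exception e) {
            // Curator's fluent API declares a checked Exception; rethrow anything unexpected.
            throw new RuntimeException(e);
        }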

Trim Whitespace from Outputfields

I think it would improve config readability if whitespace were trimmed from the output fields list, so elements can be spaced out in config files. Currently whitespace becomes part of the tuple field name. That said, I'm not too familiar with yaml conventions, so if whitespace is typically excluded from list values then perhaps it's better left alone.

fields = new Fields(((String) fieldsCfgValue).split(","));
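
A minimal sketch of the trimming (assuming fieldsCfgValue is the comma-separated string pulled from the config; uses java.util.Arrays and java.util.stream.Collectors):

// Split on commas, then strip surrounding whitespace from each field name.
final List<String> fieldNames = Arrays.stream(((String) fieldsCfgValue).split(","))
    .map(String::trim)
    .collect(Collectors.toList());
fields = new Fields(fieldNames);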

README needs overhaul.

The README currently describes the DynamicSpout framework and the Sideline implementation almost hand in hand. We should review the README, pull these concepts apart, and describe each independently.

[SIDELINE] Better checking for bad filter chain steps

java.lang.NullPointerException
	at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011)
	at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006)
	at com.salesforce.storm.spout.sideline.filter.FilterChain.addStep(FilterChain.java:49)
	at com.salesforce.storm.spout.sideline.handler.SidelineSpoutHandler.onSpoutOpen(SidelineSpoutHandler.java:186)
	at com.salesforce.storm.spout.dynamic.DynamicSpout.open(DynamicSpout.java:186)
	at org.apache.storm.daemon.executor$fn__4962$fn__4977.invoke(executor.clj:602)
	at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:482)
	at clojure.lang.AFn.run(AFn.java:22)
	at java.lang.Thread.run(Thread.java:745)
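
The NPE comes from ConcurrentHashMap rejecting a null key or value, so a null step (or identifier) produces a bare NullPointerException with no context. A hypothetical guard in FilterChain.addStep() could fail fast with a clearer error (a sketch; the actual method signature and return type may differ):

    public FilterChain addStep(final SidelineRequestIdentifier id, final FilterChainStep step) {
        // Hypothetical guard: surface a descriptive error instead of ConcurrentHashMap's bare NPE.
        if (id == null || step == null) {
            throw new IllegalArgumentException("A FilterChainStep and its identifier must not be null");
        }
        steps.put(id, step);
        return this;
    }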

Investigate long running tests

The purpose of this issue is to track improvements to our test suite's runtime.

I cleaned up and reviewed the output from one of our test runs. A handful of test classes consume the bulk of the execution time. I suspect a few factors are at play that we should investigate:

  1. Setting up the same bootstrap over and over again
  2. Using Zookeeper and Kafka where it is not necessary (we could use functional mocks in places where the hard dependency of the service is not fundamental to the test).
  3. Testing the same thing in multiple tests.

I got the output below by looking for any test classes that took longer than 10 seconds to execute, and then filtering out any individual test method that took less than 1 second.

Running com.salesforce.storm.spout.dynamic.kafka.ConsumerTest
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.404 sec - in com.salesforce.storm.spout.dynamic.kafka.ConsumerTest

Running com.salesforce.storm.spout.dynamic.buffer.MessageBufferTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.612 sec - in com.salesforce.storm.spout.dynamic.buffer.MessageBufferTest
testConcurrentModification[0: com.salesforce.storm.spout.dynamic.buffer.FifoBuffer@5bd03f44](com.salesforce.storm.spout.dynamic.buffer.MessageBufferTest)  Time elapsed: 12.288 sec
testConcurrentModification[1: com.salesforce.storm.spout.dynamic.buffer.RoundRobinBuffer@29626d54](com.salesforce.storm.spout.dynamic.buffer.MessageBufferTest)  Time elapsed: 11.322 sec

Running com.salesforce.storm.spout.dynamic.DynamicSpoutTest
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.741 sec - in com.salesforce.storm.spout.dynamic.DynamicSpoutTest
doTestWithSidelining(com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 6.179 sec
doBasicFailTest(com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 5.129 sec
testConsumeWithConsumerGroupEvenNumberOfPartitions[0: 0](com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 4.543 sec
testConsumeWithConsumerGroupEvenNumberOfPartitions[1: 1](com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 4.486 sec
testResumingSpoutWhileSidelinedVirtualSpoutIsActive(com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 17.047 sec
testConsumeWithConsumerGroupOddNumberOfPartitions[0: 0](com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 5.556 sec
testConsumeWithConsumerGroupOddNumberOfPartitions[1: 1](com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 4.572 sec
testResumingForFirehoseVirtualSpout(com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 9.443 sec
testReportErrors(com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 2.13 sec
doBasicConsumingTest[0: <null>, default](com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 2.644 sec
doBasicConsumingTest[1: SpecialStreamId, SpecialStreamId](com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 3.158 sec
testDeactivate(com.salesforce.storm.spout.dynamic.DynamicSpoutTest)  Time elapsed: 1.571 sec

Running com.salesforce.storm.spout.dynamic.SpoutCoordinatorTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.331 sec - in com.salesforce.storm.spout.dynamic.SpoutCoordinatorTest
testAddAndRemoveVirtualSpout(com.salesforce.storm.spout.dynamic.SpoutCoordinatorTest)  Time elapsed: 1.157 sec
testCoordinator(com.salesforce.storm.spout.dynamic.SpoutCoordinatorTest)  Time elapsed: 4.097 sec
testRestartsSpoutMonitorOnDeath(com.salesforce.storm.spout.dynamic.SpoutCoordinatorTest)  Time elapsed: 20.075 sec

Running com.salesforce.storm.spout.dynamic.VirtualSpoutTest
Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.889 sec - in com.salesforce.storm.spout.dynamic.VirtualSpoutTest
testDoesMessageExceedEndingOffsetWithNoEndingStateDefined(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.095 sec
testAckWithInvalidMsgIdObject(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.046 sec
testUnsubscribeTopicPartition(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.029 sec
testNextTupleIgnoresMessagesThatHaveExceededEndingStatePositionSinglePartition(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.049 sec
testCloseWithCompletedFlagSetToTrueNoSidelineREquestIdentifier(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.038 sec
testCallingOpenTwiceThrowsException(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.062 sec
testNextTupleWhenConsumerReturnsNull(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.033 sec
testDoesMessageExceedEndingOffsetWhenItDoesNotExceedEndingOffset(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.039 sec
testDoesMessageExceedEndingOffsetWhenItEqualsEndingOffset(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.063 sec
testCloseWithCompletedFlagSetToFalse(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.057 sec
testNextTupleReturnsNullWhenFiltered(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.067 sec
testFailWithInvalidMsgIdObject(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.033 sec
testCloseWithCompletedFlagSetToTrue(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.043 sec
testDoesMessageExceedEndingOffsetWhenItDoesExceedEndingOffset(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.065 sec
testDoesMessageExceedEndingOffsetForAnInvalidPartition(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.034 sec
testNextTuple(com.salesforce.storm.spout.dynamic.VirtualSpoutTest)  Time elapsed: 6.053 sec

Reconcile DelegateSpout & VirtualSpout nomenclature

We're all over the place with the naming around this. I'm actually inclined to think that we should make the following name changes:

DelegateSpout => VirtualSpout
VirtualSpout => DefaultVirtualSpout

Support consumer state that isn't an integer

Right now the PersistenceAdapter interface declares these two methods for interacting with offsets:

    void persistConsumerState(final String consumerId, final int partitionId, final long offset);

    Long retrieveConsumerState(final String consumerId, final int partitionId);

We made the assumption that an offset, or rather the state of the consumer, is always an integer, but as we're finding in other projects this doesn't always hold true.

To support more complex state tracking we need to make a shift here.

State is still going to need an offset for offset tracking, but I'm now starting to wonder if we need to create our own offset for the purposes of things like the PartitionOffsetManager. Basically, we could increment things coming off of the spout instance and track that as an ordered offset, but persist more complex data. Or maybe we leave this up to the specific consumer to implement if they're using a non-numeric piece of state.

I'm spit-balling here, but fundamentally we need to at least support a string for consumer state; we should probably also consider a bag of bytes or something.

Let's come up with a design and figure out how to refactor the necessary parts of the framework for the 0.10 milestone.
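
Purely as a discussion aid, one possible shape (a sketch, not a committed design) would treat consumer state as an opaque bag of bytes and leave interpretation to the consumer implementation:

    /**
     * Hypothetical replacement methods: state becomes an opaque byte[] that the consumer
     * implementation serializes and deserializes however it needs to.
     */
    void persistConsumerState(final String consumerId, final int partitionId, final byte[] state);

    byte[] retrieveConsumerState(final String consumerId, final int partitionId);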

RatioMessageBuffer NPEs on Spout deploy

5:44:17.913 ERROR o.a.s.d.executor [Thread-20-example-executor[3 3]]: 
java.lang.NullPointerException: null
	at com.salesforce.storm.spout.dynamic.buffer.RatioMessageBuffer$NextVirtualSpoutIdGenerator.nextVirtualSpoutId(RatioMessageBuffer.java:318) ~[dynamic-spout-0.10-SNAPSHOT.jar:?]
	at com.salesforce.storm.spout.dynamic.buffer.RatioMessageBuffer.poll(RatioMessageBuffer.java:222) ~[dynamic-spout-0.10-SNAPSHOT.jar:?]
	at com.salesforce.storm.spout.dynamic.MessageBus.nextMessage(MessageBus.java:83) ~[dynamic-spout-0.10-SNAPSHOT.jar:?]
	at com.salesforce.storm.spout.dynamic.DynamicSpout.nextTuple(DynamicSpout.java:234) ~[dynamic-spout-0.10-SNAPSHOT.jar:?]
	at org.apache.storm.daemon.executor$fn__4962$fn__4977$fn__5008.invoke(executor.clj:646) ~[storm-core-1.1.1.jar:1.1.1]
	at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484) [storm-core-1.1.1.jar:1.1.1]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]

Test / Verify what happens when Kafka Brokers become unavailable

Test Procedure

  • Start a test topology against a kafka cluster
  • Shutdown one or more brokers in the cluster
  • Determine how the spout responds.
  • Start broker(s) back up, determine how the spout responds.

Expected Behavior
It's expected that the spout will gracefully degrade, and once the broker becomes available again it correctly resumes consuming. If this is NOT what happens, lodge additional issues/work items to correct the behavior.

Upgrade to Storm 1.2 metrics methods

Storm 1.1 and lower offered a method on TopologyContext called registerMetrics() (see https://github.com/apache/storm/blob/1.1.x-branch/storm-core/src/jvm/org/apache/storm/task/TopologyContext.java). In Storm 1.2 this method was deprecated, and registerTimer(), registerHistogram(), registerMeter(), registerCounter() and registerGauge() were added (see https://github.com/apache/storm/blob/v1.2.2/storm-core/src/jvm/org/apache/storm/task/TopologyContext.java#L396-L418). Within this library we represent several metric types via the MetricsRecorder interface, specifically countBy(), assignValue() and recordTimer(), along with a set of helpers for each. Our current metrics methods map to three of the new ones, but we have gaps for meters (which measure mean throughput and one-, five-, and fifteen-minute exponentially-weighted moving average throughputs) and histograms (which calculate the distribution of a value).

In the short term (0.10) we can swap StormRecorder (the implementation of MetricsRecorder that leverages TopologyContext's registerMetrics()) over to these new methods. This does impose a hard requirement on Storm 1.2, because the new metrics methods did not exist before that release. Currently 0.10 requires 1.2 but should function fine with older versions; that would no longer be the case after this metrics change. Longer term, do we need to consider exposing the other two metric types? As it stands today we don't have a use case, but we do pass the MetricsRecorder instance downstream to many classes that can be overridden through third-party extension.
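
A rough sketch of what the swap inside StormRecorder might look like (field and variable names here are purely illustrative, and the exact Storm 1.2 TopologyContext signatures should be verified against the linked source):

        // Inside StormRecorder.open(...): register codahale metrics via the Storm 1.2 API.
        this.emitCounter = topologyContext.registerCounter("emit");    // would back countBy()
        this.tupleTimer = topologyContext.registerTimer("nextTuple");  // would back recordTimer()

        // countBy(...) then delegates to:
        emitCounter.inc(incrementBy);

        // recordTimer(...) then delegates to:
        tupleTimer.update(elapsedMs, TimeUnit.MILLISECONDS);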

Null Pointer Exceptions in SidelineSpoutHandler and Consumer

My team recently started and stopped some sidelines and then found these Null Pointer Exceptions in our logs.

2017-11-29 16:42:31.793 [Curator-PathChildrenCache-0] o.a.c.f.r.c.PathChildrenCache [ERROR]  
java.lang.NullPointerException: null
	at com.salesforce.storm.spout.sideline.handler.SidelineSpoutHandler.stopSidelining(SidelineSpoutHandler.java:305) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.sideline.SpoutTriggerProxy.stopSidelining(SpoutTriggerProxy.java:65) ~[stormjar.jar:?]
[redacted]
[redacted]
[redacted]
	at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:522) [stormjar.jar:?]
	at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:516) [stormjar.jar:?]
	at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:93) [stormjar.jar:?]
	at org.apache.curator.shaded.com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) [stormjar.jar:?]
	at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:85) [stormjar.jar:?]
	at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:514) [stormjar.jar:?]
	at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) [stormjar.jar:?]
	at org.apache.curator.framework.recipes.cache.PathChildrenCache$9.run(PathChildrenCache.java:773) [stormjar.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_102]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_102]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_102]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_102]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_102]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_102]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]

and

2017-11-29 15:12:04.880 [Thread-498-EventStreamSpout-executor[98 98]] o.a.s.d.executor [ERROR]  
java.lang.NullPointerException: null
	at com.salesforce.storm.spout.dynamic.kafka.Consumer.getKafkaConsumer(Consumer.java:155) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.dynamic.kafka.Consumer.metrics(Consumer.java:656) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.dynamic.kafka.Consumer.getMaxLag(Consumer.java:663) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.dynamic.VirtualSpout.getMaxLag(VirtualSpout.java:572) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.dynamic.coordinator.SpoutPartitionProgressMonitor.reportStatus(SpoutPartitionProgressMonitor.java:75) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.dynamic.coordinator.SpoutMonitor.reportStatus(SpoutMonitor.java:386) ~[stormjar.jar:?]
	at com.salesforce.storm.spout.dynamic.coordinator.SpoutMonitor.run(SpoutMonitor.java:239) ~[stormjar.jar:?]
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626) ~[?:1.8.0_102]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_102]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_102]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]

I don't know much about these, but I imagine they could either be handled safely or a more specific/friendly exception could be thrown in their stead.
