
pykafka's Introduction


PyKafka


PyKafka is a programmer-friendly Kafka client for Python. It includes Python implementations of Kafka producers and consumers, which are optionally backed by a C extension built on librdkafka. It runs under Python 2.7+, Python 3.4+, and PyPy, and supports versions of Kafka 0.8.2 and newer.

PyKafka's primary goal is to provide a level of abstraction similar to that of the JVM Kafka client, using idioms familiar to Python programmers and exposing the most Pythonic API possible.

You can install PyKafka from PyPI with

$ pip install pykafka

or from conda-forge with

$ conda install -c conda-forge pykafka

Full documentation and usage examples for PyKafka can be found on readthedocs.

You can install PyKafka for local development and testing by cloning this repository and running

$ python setup.py develop

Getting Started

Assuming you have at least one Kafka instance running on localhost, you can use PyKafka to connect to it.

python

>>> from pykafka import KafkaClient
>>> client = KafkaClient(hosts="127.0.0.1:9092,127.0.0.1:9093,...")

Or, for a TLS connection, you might write (see also the SslConfig docs for further details):

python

>>> from pykafka import KafkaClient, SslConfig
>>> config = SslConfig(cafile='/your/ca.cert',
...                    certfile='/your/client.cert',  # optional
...                    keyfile='/your/client.key',  # optional
...                    password='unlock my client key please')  # optional
>>> client = KafkaClient(hosts="127.0.0.1:<ssl-port>,...",
...                      ssl_config=config)

If the cluster you've connected to has any topics defined on it, you can list them with:

python

>>> client.topics
>>> topic = client.topics['my.test']

Once you've got a Topic, you can create a Producer for it and start producing messages.

python

>>> with topic.get_sync_producer() as producer:
...     for i in range(4):
...         producer.produce('test message ' + str(i ** 2))

The example above would produce to Kafka synchronously - the call only returns after we have confirmation that the message made it to the cluster.

To achieve higher throughput, we recommend using the Producer in asynchronous mode, so that produce() calls return immediately and the producer may opt to send messages in larger batches. The Producer collects produced messages in an internal queue for linger_ms milliseconds before sending each batch. This delay can be removed or changed, at the expense of efficiency, with the linger_ms, min_queued_messages, and other keyword arguments (see readthedocs). You can still obtain delivery confirmation for messages via a queue interface, which is enabled by setting delivery_reports=True. Here's a rough usage example:

python

>>> import queue  # 'Queue' on Python 2
>>> with topic.get_producer(delivery_reports=True) as producer:
...     count = 0
...     while True:
...         count += 1
...         producer.produce('test msg', partition_key='{}'.format(count))
...         if count % 10 ** 5 == 0:  # adjust this or bring lots of RAM ;)
...             while True:
...                 try:
...                     msg, exc = producer.get_delivery_report(block=False)
...                     if exc is not None:
...                         print('Failed to deliver msg {}: {}'.format(
...                             msg.partition_key, repr(exc)))
...                     else:
...                         print('Successfully delivered msg {}'.format(
...                             msg.partition_key))
...                 except queue.Empty:
...                     break

Note that the delivery report queue is thread-local: it will only serve reports for messages which were produced from the current thread. Also, if you're using delivery_reports=True, failing to consume the delivery report queue will cause PyKafka's memory usage to grow unbounded.

You can also consume messages from this topic using a Consumer instance.

python

>>> consumer = topic.get_simple_consumer()
>>> for message in consumer:
...     if message is not None:
...         print(message.offset, message.value)
0 test message 0
1 test message 1
2 test message 4
3 test message 9

This SimpleConsumer doesn't scale - if you have two SimpleConsumers consuming the same topic, they will receive duplicate messages. To get around this, you can use the BalancedConsumer.

python

>>> balanced_consumer = topic.get_balanced_consumer( ... consumer_group='testgroup', ... auto_commit_enable=True, ... zookeeper_connect='myZkClusterNode1.com:2181,myZkClusterNode2.com:2181/myZkChroot' ... )

You can have as many BalancedConsumer instances consuming a topic as that topic has partitions. If they are all connected to the same ZooKeeper instance, they will communicate with it to automatically balance the partitions between themselves. The BalancedConsumer uses the "range" partition assignment strategy by default. The strategy is switchable via the membership_protocol keyword argument, and can be either an object exposed by pykafka.membershipprotocol or a custom instance of pykafka.membershipprotocol.GroupMembershipProtocol.
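For example, a minimal sketch of switching strategies (assuming RoundRobinProtocol is among the objects exposed by pykafka.membershipprotocol; check the module for the exact names):

python

>>> from pykafka.membershipprotocol import RoundRobinProtocol
>>> balanced_consumer = topic.get_balanced_consumer(
...     consumer_group='testgroup',
...     zookeeper_connect='myZkClusterNode1.com:2181',
...     membership_protocol=RoundRobinProtocol)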

You can also use the Kafka 0.9 Group Membership API with the managed keyword argument on get_balanced_consumer.

Using the librdkafka extension

PyKafka includes a C extension that makes use of librdkafka to speed up producer and consumer operation.

To ensure the C extension is compiled, set the environment variable RDKAFKA_INSTALL=system during pip install or setup.py; the setup will then fail if the C extension cannot be compiled. Conversely, RDKAFKA_INSTALL='' explicitly specifies that the C extension should not be compiled. The current default behavior is to compile the extension, but not to fail the setup if compilation fails.
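For example, to require the extension at install time:

sh

$ RDKAFKA_INSTALL=system pip install pykafka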

PyKafka requires librdkafka v0.9.1+. Some system package managers may not have up-to-date versions. To use the librdkafka extension, you need to make sure the header files and shared library are somewhere Python can find them, both when you build the extension (which is taken care of by setup.py develop) and at run time. Typically, this means that you need to either install librdkafka in a place conventional for your system, or declare C_INCLUDE_PATH, LIBRARY_PATH, and LD_LIBRARY_PATH in your shell environment to point to the installation location of the librdkafka shared objects. You can find this location with locate librdkafka.so.

After that, all that's needed is to pass an extra parameter use_rdkafka=True to topic.get_producer(), topic.get_simple_consumer(), or topic.get_balanced_consumer(). Note that some configuration options may have different optimal values; it may be worthwhile to consult librdkafka's configuration notes for this.
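For example, reusing the topic from the earlier examples:

python

>>> producer = topic.get_producer(use_rdkafka=True)
>>> consumer = topic.get_simple_consumer(use_rdkafka=True)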

Operational Tools

PyKafka includes a small collection of CLI tools that can help with common tasks related to the administration of a Kafka cluster, including offset and lag monitoring and topic inspection. The full, up-to-date interface for these tools can be found by running

sh

$ python cli/kafka_tools.py --help

or after installing PyKafka via setuptools or pip:

sh

$ kafka-tools --help

PyKafka or kafka-python?

These are two different projects. See the discussion here for a comparison between the two.

Contributing

If you're interested in contributing code to PyKafka, a good place to start is the "help wanted" issue tag. We also recommend taking a look at the contribution guide.

Support

If you need help using PyKafka, there are a number of resources available. For usage questions or common recipes, check out the StackOverflow tag. The Google Group can be useful for more in-depth questions or inquiries you'd like to send directly to the PyKafka maintainers. If you believe you've found a bug in PyKafka, please open a GitHub issue after reading the contribution guide.


pykafka's Issues

resolve test_empty_list error

stack trace here:

https://gist.github.com/9476a289c48ebc309020

Looks like it could be a problem with kazoo.

The bug is that the test_empty_list test sometimes stalls. This has been responsible for almost every timeout in the concurrent_fetch branch on Travis. However, I've had a hard time reproducing it, especially on OS X. This backtrace is from an Ubuntu 4/12 x86 image.

better exception when writes fail due to a lack of available partitions

======================================================================
ERROR: test_service_additions_work (tests.chattr.test_integration.TestKafkaTpoicIntegration)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/jeff/Code/chattr.io/client/tests/chattr/test_integration.py", line 53, in test_service_additions_work
    returned = self.send_receive(self.valid_message)
File "/Users/jeff/Code/chattr.io/client/tests/chattr/test_integration.py", line 40, in send_receive
    self.channel.send(message)
File "/Users/jeff/Code/chattr.io/client/src/chattr/io/models.py", line 90, in send
    return self.__delegate('send', self.translator.encode(prepped), block)
File "/Users/jeff/Code/chattr.io/client/src/chattr/io/models.py", line 63, in __delegate
    return getattr(self.__engine, method)(self.name, *args, **kwargs)
File "/Users/jeff/Code/chattr.io/client/src/chattr/io/engines/kafka.py", line 29, in send
    self.topics[channel_name].publish(message)
File "/Users/jeff/Code/samsa/samsa/topics.py", line 81, in publish
    partition = random.choice(list(self.partitions))
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/random.py", line 261, in choice
    return seq[int(self.random() * len(seq))]  # raises IndexError if seq is empty
IndexError: list index out of range

This is confusing to end users.
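A minimal sketch of a friendlier failure mode (the exception name and helper here are hypothetical, not samsa's actual API):

import random

class NoAvailablePartitionsError(Exception):
    """Raised when a topic has no partitions available for writes."""

def choose_partition(topic_name, partitions):
    """Pick a random partition, failing loudly when none are available."""
    partitions = list(partitions)
    if not partitions:
        # A descriptive error instead of the bare IndexError above.
        raise NoAvailablePartitionsError(
            'Cannot publish to topic %r: no available partitions' % topic_name)
    return random.choice(partitions)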

Add kafka log handler

Should be able to register a log handler that will send log messages through Kafka.
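A minimal sketch of such a handler, built on the producer API shown in the README above (the host and topic name are placeholders):

import logging

from pykafka import KafkaClient

class KafkaLogHandler(logging.Handler):
    """Forward log records to a Kafka topic via an async producer."""

    def __init__(self, hosts, topic_name):
        logging.Handler.__init__(self)
        client = KafkaClient(hosts=hosts)
        # An asynchronous producer keeps logging from blocking the caller.
        self.producer = client.topics[topic_name].get_producer()

    def emit(self, record):
        try:
            self.producer.produce(self.format(record).encode('utf-8'))
        except Exception:
            self.handleError(record)

    def close(self):
        self.producer.stop()
        logging.Handler.close(self)

It could then be attached with logging.getLogger().addHandler(KafkaLogHandler('127.0.0.1:9092', 'logs')).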

Deadlock when consumer timeout is None and no messages

Fetch thread fires once and returns. If it finds nothing, we can still block indefinitely.

A possible solution is to loop fetches with exponential back-off on 0 results (see samsa/consumer/partitions.py:OwnedPartition._fetch).
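A rough sketch of that loop (fetch here stands in for OwnedPartition._fetch; the delay bounds are arbitrary):

import time

def fetch_with_backoff(fetch, initial_delay=0.1, max_delay=5.0):
    """Retry fetch with exponential back-off until it yields results."""
    delay = initial_delay
    while True:
        results = fetch()
        if results:
            return results
        # Nothing yet: wait, then try again with a doubled delay.
        time.sleep(delay)
        delay = min(delay * 2, max_delay)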

Replace config with kwargs

Let's get rid of samsa/config.py

It's a bad idea for Python.

Instead, let's move the values to the appropriate method signatures with the given defaults.
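A sketch of the shape this could take (the names and defaults are illustrative, not samsa's actual signatures):

class Client(object):
    # Before: size and timeout were read from samsa/config.py at call time.
    # After: the defaults live in the signature and are overridable per call.
    def fetch(self, topic, partition, offset=0, size=307200, timeout=30.0):
        """Fetch messages, with tunables exposed as keyword arguments."""
        raise NotImplementedError  # body elided; only the signature matters here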

fix consumer iterator

We want to achieve a few things with this:

  1. Decouple fetching messages from consuming them (i.e. so we can asynchronously grab more).
  2. Make the iterator infinite.
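A rough sketch of that shape, with a background thread filling a queue that an infinite iterator drains (names are illustrative):

import threading
try:
    import queue  # Python 3
except ImportError:
    import Queue as queue  # Python 2

class MessageIterator(object):
    """Sketch: fetch messages in the background; iterate over them forever."""

    def __init__(self, fetch):
        self._queue = queue.Queue(maxsize=1000)
        self._fetch = fetch
        fetcher = threading.Thread(target=self._fill)
        fetcher.daemon = True
        fetcher.start()

    def _fill(self):
        while True:
            for message in self._fetch():
                self._queue.put(message)  # blocks when the buffer is full

    def __iter__(self):
        while True:  # infinite: block until the fetcher produces more
            yield self._queue.get()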

unexpected "sample larger than population" error

(2012-10-09 14:25:19,751 [INFO] extensions) Subscribing to Channel(realertime.all, engine=<chattr.io.engines.kafka.KafkaEngine object at 0x107f00d50>, versions={})
Traceback (most recent call last):
  File "/Users/adam/.virtualenvs/realertime/lib/python2.7/site-packages/gevent/greenlet.py", line 390, in run
    result = self._run(*self.args, **self.kwargs)
  File "/Users/adam/Documents/sandbox/disqus-service/src/disqus/service/concurrency.py", line 33, in _run
    self.target(*self.args, **self.kwargs)
  File "/Users/adam/Documents/sandbox/disqus-service/src/disqus/service/application/extensions.py", line 87, in subscribe
    map(self.__handle_message, self.listen_to)
  File "/Users/adam/Documents/sandbox/chattr.io/client/src/chattr/io/models.py", line 72, in next
    raw_data = self.__delegate('next', block)
  File "/Users/adam/Documents/sandbox/chattr.io/client/src/chattr/io/models.py", line 63, in __delegate
    return getattr(self.__engine, method)(self.name, *args, **kwargs)
  File "/Users/adam/Documents/sandbox/chattr.io/client/src/chattr/io/engines/kafka.py", line 31, in next
    return self.iterator(channel_name).next()                                                                                                  
  File "/Users/adam/.virtualenvs/realertime/lib/python2.7/site-packages/samsa/consumer/consumer.py", line 142, in __iter__
    msg = self.next_message(self.config['consumer_timeout'])
  File "/Users/adam/.virtualenvs/realertime/lib/python2.7/site-packages/samsa/consumer/consumer.py", line 152, in next_message
    return random.sample(self.partitions, 1)[0].next_message(timeout)
  File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/random.py", line 320, in sample
    raise ValueError("sample larger than population")
ValueError: sample larger than population
<GeventThread at 0x107ed9410> failed with ValueError

calling fetch at the last known offset raises protocol error

The test which causes this is here:

https://github.com/disqus/samsa/blob/master/tests/samsa/test_consumer.py#L193

This is an example of the error:

======================================================================
ERROR: test_consumes (tests.samsa.test_consumer.TestConsumerIntegration)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/mwhooker/dev/samsa/tests/samsa/test_consumer.py", line 193, in test_consumes
    self.assertEquals(list(consumer), [])
  File "/Users/mwhooker/dev/samsa/samsa/consumer.py", line 56, in fetch
    for last_offset, msg in messages:
  File "/Users/mwhooker/dev/samsa/samsa/client.py", line 74, in decode_messages
    yield offset, decode_message(value)
  File "/Users/mwhooker/dev/samsa/samsa/client.py", line 52, in decode_message
    payload = value.unwrap(4)
  File "/Users/mwhooker/dev/samsa/samsa/utils/structuredio.py", line 67, in unwrap
    payload.write(self.unframe(size, validate))
  File "/Users/mwhooker/dev/samsa/samsa/utils/structuredio.py", line 56, in unframe
    raise ValueError('Payload length does not match length specified in header')
ValueError: Payload length does not match length specified in header
-------------------- >> begin captured logging << --------------------

I added some debug to StructuredBytesIO.unframe so it now looks like:

def unframe(self, size, validate=True):
    length = self.unpack(size)
    value = self.read(int(length))
    print "val: ", len(value)
    print "len: ", length
    if validate and len(value) != length:
        raise ValueError('Payload length does not match length specified in header')
    return value

and this is the output:

val:  16
len:  16
val:  5
len:  1819045664

odd

add license headers and file

We can use this script:

#!/bin/bash
# based on http://stackoverflow.com/a/151690/105571

find . -name "*.py" | while read -r i
do
  if ! grep -q Copyright "$i"
  then
    echo '"""' > "$i.new"
    tail -13 LICENSE >> "$i.new"
    echo '"""' >> "$i.new"
    echo '' >> "$i.new"
    cat "$i" >> "$i.new"
    mv "$i.new" "$i"
  fi
done

unexpected "Couldn't acquire partitions" error

(2012-10-09 14:37:15,135 [INFO] extensions) Subscribing to Channel(realertime.all, engine=<chattr.io.engines.kafka.KafkaEngine object at 0x102872f90>, versions={})
Traceback (most recent call last):
  File "/Users/adam/.virtualenvs/realertime/lib/python2.7/site-packages/gevent/greenlet.py", line 390, in run
    result = self._run(*self.args, **self.kwargs)
  File "/Users/adam/Documents/sandbox/disqus-service/src/disqus/service/concurrency.py", line 33, in _run
    self.target(*self.args, **self.kwargs)
  File "/Users/adam/Documents/sandbox/disqus-service/src/disqus/service/application/extensions.py", line 87, in subscribe
    map(self.__handle_message, self.listen_to)
  File "/Users/adam/Documents/sandbox/chattr.io/client/src/chattr/io/models.py", line 72, in next
    raw_data = self.__delegate('next', block)
  File "/Users/adam/Documents/sandbox/chattr.io/client/src/chattr/io/models.py", line 63, in __delegate
    return getattr(self.__engine, method)(self.name, *args, **kwargs)
  File "/Users/adam/Documents/sandbox/chattr.io/client/src/chattr/io/engines/kafka.py", line 31, in next
    return self.iterator(channel_name).next()
  File "/Users/adam/Documents/sandbox/chattr.io/client/src/chattr/io/engines/kafka.py", line 21, in iterator
    self.consumer_group_id
  File "/Users/adam/.virtualenvs/realertime/lib/python2.7/site-packages/samsa/topics.py", line 108, in subscribe
    return Consumer(self.cluster, self, group)
  File "/Users/adam/.virtualenvs/realertime/lib/python2.7/site-packages/samsa/consumer/consumer.py", line 65, in __init__
    self._rebalance()
  File "/Users/adam/.virtualenvs/realertime/lib/python2.7/site-packages/samsa/consumer/consumer.py", line 135, in _rebalance
    raise SamsaException("Couldn't acquire partitions.")
SamsaException: Couldn't acquire partitions.
<GeventThread at 0x10284b410> failed with SamsaException

let kafka be instrumented

Have the instrumentation interface allow different types of monitoring systems to understand what is going on.

The first implementer will be a benchmark that simply reports how well it consumes and produces millions of messages.
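One possible shape for the interface (entirely illustrative):

class Instrumentation(object):
    """Hooks a monitoring system can implement to observe the client."""

    def message_produced(self, topic, nbytes):
        pass

    def message_consumed(self, topic, nbytes):
        pass

class BenchmarkInstrumentation(Instrumentation):
    """First implementer: count throughput while moving millions of messages."""

    def __init__(self):
        self.produced = 0
        self.consumed = 0

    def message_produced(self, topic, nbytes):
        self.produced += 1

    def message_consumed(self, topic, nbytes):
        self.consumed += 1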

prevent integration tests from starting before zookeeper cluster and kafka broker are ready

We could probably achieve this by sending offsets requests until we get a valid response from the socket.

Traceback (most recent call last):
  File "/usr/local/pypy/lib-python/2.7/threading.py", line 552, in __bootstrap_inner
    self.run()
  File "/usr/local/pypy/lib-python/2.7/threading.py", line 505, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/travis/builds/disqus/samsa/samsa/consumer/partitions.py", line 128, in _fetch
    size
  File "/home/travis/builds/disqus/samsa/samsa/partitions.py", line 227, in fetch
    return self.broker.client.fetch(self.topic.name, self.number, offset,
  File "/home/travis/builds/disqus/samsa/samsa/brokers.py", line 186, in client
    port=self.port)
  File "/home/travis/builds/disqus/samsa/samsa/client.py", line 396, in __init__
    self.connect()
  File "/home/travis/builds/disqus/samsa/samsa/client.py", line 399, in connect
    self.connection.connect()
  File "/home/travis/builds/disqus/samsa/samsa/client.py", line 334, in connect
    timeout=self.timeout
  File "/usr/local/pypy/lib-python/2.7/socket.py", line 616, in create_connection
    raise err
error: [Errno 111] Connection refused

FAIL
Test that we can consume messages from kafka. ... Exception in thread Thread-141:
Traceback (most recent call last):
  File "/usr/local/pypy/lib-python/2.7/threading.py", line 552, in __bootstrap_inner
    self.run()
  File "/usr/local/pypy/lib-python/2.7/threading.py", line 505, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/travis/builds/disqus/samsa/samsa/consumer/partitions.py", line 128, in _fetch
    size
  File "/home/travis/builds/disqus/samsa/samsa/partitions.py", line 228, in fetch
    size)
  File "/home/travis/builds/disqus/samsa/samsa/client.py", line 489, in fetch
    return decode_messages(response.get(), from_offset=offset)
  File "/home/travis/builds/disqus/samsa/samsa/handlers.py", line 50, in get
    raise self.error
error: [Errno 104] Connection reset by peer
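A sketch of that readiness check; the offsets-request probe described above is simplified here to a bare TCP connect:

import socket
import time

def wait_for_broker(host, port, timeout=30.0):
    """Block until the broker accepts a connection, or give up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.create_connection((host, port), timeout=1.0).close()
            return True
        except socket.error:
            time.sleep(0.5)  # not listening yet; poll again shortly
    return False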

robust producer

What happens if a write to a broker fails?

Stream of consciousness:

  • we should have super short timeouts so we don't block
  • if there's a failure we should be able to tell the partitioner whether we want to
    • move on to the next broker.
      • Mark the broker as down for the client. Add it back in after a timeout. (see my cluster code.)
    • or immediately raise an error.
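A sketch of the failover idea from the notes above (the broker objects and their send call are placeholders, not samsa's API):

import time

class FailoverSender(object):
    """Try brokers in turn, marking failed ones as down for a while."""

    def __init__(self, brokers, retry_after=30.0):
        self.brokers = brokers
        self.retry_after = retry_after
        self._down_since = {}

    def send(self, payload):
        for broker in self.brokers:
            down = self._down_since.get(broker)
            if down is not None and time.time() - down < self.retry_after:
                continue  # still marked down; add it back after the timeout
            try:
                return broker.send(payload)  # placeholder for the client call
            except IOError:
                self._down_since[broker] = time.time()  # mark down, move on
        raise IOError('no broker accepted the write')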

fix transient travis error

FAIL: Test that message offsets are persisted to ZK.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/vagrant/virtualenv/python2.6/lib/python2.6/site-packages/mock.py", line 1190, in patched
    return func(*args, **keywargs)
  File "/home/vagrant/builds/disqus/samsa/tests/samsa/test_consumer.py", line 159, in test_commits_offsets
    , 5
  File "/home/vagrant/builds/disqus/samsa/samsa/test/case.py", line 56, in assertPassesWithMultipleAttempts
    fn()
  File "/home/vagrant/builds/disqus/samsa/tests/samsa/test_consumer.py", line 158, in <lambda>
    lambda: self.assertEquals(c.next_message(10), msgs[0].payload)
AssertionError: None != '111'
    "None != '111'" = '%s != %s' % (safe_repr(None), safe_repr('111'))
    "None != '111'" = self._formatMessage("None != '111'", "None != '111'")
>>  raise self.failureException("None != '111'")

add streaming reads

Revisit #3 for streaming responses, now that handlers keep an ordering of responses so we don't have to worry about interleaved requests trying to read from the same socket.

This will allow faster processing of messages at the start of a fetch/multifetch request.
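A sketch of the streaming shape (the response iterator and decoder are placeholders):

def iter_messages(response_chunks, decode_messages):
    """Yield messages as chunks arrive instead of buffering the full response."""
    for chunk in response_chunks:
        for offset, message in decode_messages(chunk):
            yield offset, message  # callers can start processing immediately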

fix pep8 issues

./samsa/brokers.py:25:5: E128 continuation line under-indented for visual indent
./samsa/brokers.py:51:13: E128 continuation line under-indented for visual indent
./samsa/brokers.py:55:17: E128 continuation line under-indented for visual indent
./samsa/brokers.py:58:17: E128 continuation line under-indented for visual indent
./samsa/client.py:103:13: E128 continuation line under-indented for visual indent
./samsa/client.py:129:21: E128 continuation line under-indented for visual indent
./samsa/client.py:374:13: E128 continuation line under-indented for visual indent
./samsa/client.py:576:17: E128 continuation line under-indented for visual indent
./samsa/exceptions.py:61:5: E128 continuation line under-indented for visual indent
./samsa/partitions.py:26:5: E128 continuation line under-indented for visual indent
./samsa/partitions.py:55:17: E128 continuation line under-indented for visual indent
./samsa/partitions.py:103:13: E128 continuation line under-indented for visual indent
./samsa/partitions.py:122:13: E128 continuation line under-indented for visual indent
./samsa/partitions.py:126:13: E128 continuation line under-indented for visual indent
./samsa/partitions.py:182:21: E128 continuation line under-indented for visual indent
./samsa/partitions.py:224:13: E128 continuation line under-indented for visual indent
./samsa/partitions.py:228:13: E128 continuation line under-indented for visual indent
./samsa/topics.py:101:17: E128 continuation line under-indented for visual indent
./samsa/consumer/consumer.py:27:80: E501 line too long (90 > 79 characters)
./samsa/consumer/consumer.py:79:80: E501 line too long (85 > 79 characters)
./samsa/consumer/consumer.py:80:80: E501 line too long (84 > 79 characters)
./samsa/consumer/consumer.py:80:17: E128 continuation line under-indented for visual indent
./samsa/consumer/consumer.py:125:80: E501 line too long (83 > 79 characters)
./samsa/consumer/consumer.py:126:80: E501 line too long (80 > 79 characters)
./samsa/consumer/consumer.py:133:25: E128 continuation line under-indented for visual indent
./samsa/consumer/consumer.py:140:5: E303 too many blank lines (2)
./samsa/consumer/consumer.py:146:17: E126 continuation line over-indented for hanging indent
./samsa/consumer/consumer.py:147:17: E122 continuation line missing indentation or outdented
./samsa/consumer/consumer.py:148:17: E122 continuation line missing indentation or outdented
./samsa/consumer/consumer.py:149:13: E123 closing bracket does not match indentation of opening bracket's line
./samsa/consumer/partitions.py:118:80: E501 line too long (82 > 79 characters)
./samsa/test/case.py:9:13: E128 continuation line under-indented for visual indent
./samsa/test/case.py:26:25: E128 continuation line under-indented for visual indent
./samsa/test/integration.py:63:9: E128 continuation line under-indented for visual indent
./samsa/test/integration.py:120:17: E126 continuation line over-indented for hanging indent
./samsa/test/integration.py:148:13: E128 continuation line under-indented for visual indent
./samsa/test/integration.py:174:17: E128 continuation line under-indented for visual indent
./samsa/test/integration.py:217:17: E128 continuation line under-indented for visual indent
./samsa/test/integration.py:227:17: E128 continuation line under-indented for visual indent
./samsa/test/integration.py:259:13: E128 continuation line under-indented for visual indent
./samsa/test/integration.py:286:13: E128 continuation line under-indented for visual indent
./samsa/test/integration.py:297:13: E128 continuation line under-indented for visual indent
./samsa/test/integration.py:314:13: E128 continuation line under-indented for visual indent
./samsa/utils/__init__.py:35:13: E128 continuation line under-indented for visual indent
./samsa/utils/delayedconfig.py:34:17: E128 continuation line under-indented for visual indent
./samsa/utils/structuredio.py:55:17: E128 continuation line under-indented for visual indent
./tests/samsa/test_config.py:33:80: E501 line too long (92 > 79 characters)
./tests/samsa/test_consumer.py:42:25: E221 multiple spaces before operator
./tests/samsa/test_consumer.py:232:80: E501 line too long (80 > 79 characters)
./tests/samsa/test_consumer.py:235:5: E303 too many blank lines (2)
./tests/samsa/brokers/tests.py:46:13: E128 continuation line under-indented for visual indent
./tests/samsa/client/tests.py:62:17: E128 continuation line under-indented for visual indent
./tests/samsa/client/tests.py:74:13: E128 continuation line under-indented for visual indent
./tests/samsa/client/tests.py:85:17: E128 continuation line under-indented for visual indent
./tests/samsa/client/tests.py:99:13: E128 continuation line under-indented for visual indent
./tests/samsa/client/tests.py:188:13: E128 continuation line under-indented for visual indent
./tests/samsa/client/tests.py:265:21: E128 continuation line under-indented for visual indent
./tests/samsa/utils/delayedconfig/tests.py:21:5: E128 continuation line under-indented for visual indent
