
Eclipse Mosquitto

Mosquitto is an open source implementation of a server for versions 5.0, 3.1.1 and 3.1 of the MQTT protocol. It also includes a C and C++ client library, and the mosquitto_pub and mosquitto_sub utilities for publishing and subscribing.

Links

See https://mqtt.org/ for more information on MQTT.

Mosquitto project information is available at https://mosquitto.org/.

There is also a public test server available at https://test.mosquitto.org/

Installing

See https://mosquitto.org/download/ for details on installing binaries for various platforms.

Quick start

If you have installed a binary package the broker should have been started automatically. If not, it can be started with a basic configuration:

mosquitto

Then use mosquitto_sub to subscribe to a topic:

mosquitto_sub -t 'test/topic' -v

And to publish a message:

mosquitto_pub -t 'test/topic' -m 'hello world'

Documentation

Documentation for the broker, clients and client library API can be found in the man pages, which are available online at https://mosquitto.org/man/. There are also pages with an introduction to the features of MQTT, the mosquitto_passwd utility for dealing with username/passwords, and a description of the configuration file options available for the broker.

Detailed client library API documentation can be found at https://mosquitto.org/api/

Building from source

To build from source the recommended route for end users is to download the archive from https://mosquitto.org/download/.

On Windows and Mac, use cmake to build. On other platforms, just run make to build. For Windows, see also README-windows.md.

If you are building from the git repository then the documentation will not already be built. Use make binary to skip building the man pages, or install docbook-xsl on Debian/Ubuntu systems.

Build Dependencies

  • c-ares (libc-ares-dev on Debian based systems) - only when compiled with make WITH_SRV=yes
  • cJSON - for client JSON output support. Disable with make WITH_CJSON=no. Auto-detected with CMake.
  • libwebsockets (libwebsockets-dev) - enable with make WITH_WEBSOCKETS=yes
  • openssl (libssl-dev on Debian based systems) - disable with make WITH_TLS=no
  • pthreads - for client library thread support. This is required to support the mosquitto_loop_start() and mosquitto_loop_stop() functions. If compiled without pthread support, the library isn't guaranteed to be thread safe.
  • uthash / utlist - bundled versions of these headers are provided, disable their use with make WITH_BUNDLED_DEPS=no
  • xsltproc (xsltproc and docbook-xsl on Debian based systems) - only needed when building from git sources - disable with make WITH_DOCS=no

Equivalent options for enabling/disabling features are available when using the CMake build.
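As an illustration of the options listed above (a sketch only; pick the options relevant to your build), a build from a git checkout that enables websockets and skips the man pages might look like:

make WITH_WEBSOCKETS=yes WITH_DOCS=no
sudo make install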

Credits

Mosquitto was written by Roger Light [email protected]


Issues

Make persistence file dynamically updateable

migrated from Bugzilla #470044
status NEW severity enhancement in component Mosquitto for 1.4
Reported in version unspecified on platform All
Assigned to: Roger Light

On 2015-06-12 04:59:16 -0400, Roger Light wrote:

It would be nice if the persistence file could be updated as necessary, not in a single periodic write.

Issues with a dynamic approach include that messages might appear out of order, and that the file would need defragmenting.

Sort include_dir config files before reading

migrated from Bugzilla #485588
status NEW severity normal in component Mosquitto for 1.4
Reported in version unspecified on platform All
Assigned to: Roger Light

On 2016-01-11 17:57:52 -0500, Roger Light wrote:

To make behaviour deterministic, especially from platform to platform.

Client connected to mosquitto bridge on Android receives same message twice when published by a Mosca Broker on cloud

migrated from Bugzilla #476856
status UNCONFIRMED severity normal in component Mosquitto for 1.4
Reported in version unspecified on platform Other
Assigned to: Roger Light

On 2015-09-08 05:41:59 -0400, arvind sesha wrote:

I have a strange issue with a Mosquitto bridge on my Android device connecting to Mosca in the cloud.

The scenario is that I have an MQTT client connected to the Mosquitto bridge on the Android device and another client directly connected to the Mosca broker. When a message is published through the Mosca broker, the client directly connected to Mosca receives it only once [which is correct], but the client connected through the mosquitto bridge receives the same message twice. The QoS is set to 1 at the bridge configuration and at the clients.

The mosquitto bridge configuration is as below

connection bridgetocloud
address Mosca-balancer-23232322323..elb.amazonaws.com:1883
topic mqtt/2/events/# both 1
bridge_attempt_unsubscribe false
keepalive_interval 30
start_type automatic
restart_timeout 1
try_private true

On 2015-09-16 12:46:28 -0400, Roger Light wrote:

Is it possible that you have previously set a topic subscription that is now giving you duplicate messages?

Try setting the bridge clean session argument to true and see if the problem remains.
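For illustration, that suggestion would amount to adding one line to the bridge configuration quoted above (a sketch; cleansession is the bridge option name in mosquitto.conf):

connection bridgetocloud
cleansession true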

Fix daemonisation

migrated from Bugzilla #485589
status RESOLVED severity normal in component Mosquitto for 1.4
Reported in version unspecified on platform PC
Assigned to: Roger Light

On 2016-01-11 17:59:10 -0500, Roger Light wrote:

Should detach from stdout/stderr for example. See http://www.itp.uzh.ch/~dpotter/howto/daemonize

On 2016-02-11 16:50:31 -0500, Roger Light wrote:

Fixed in the fixes branch.

Add option to use full certificate subject as username

migrated from Bugzilla #469467
status RESOLVED severity enhancement in component Mosquitto for 1.4
Reported in version unspecified on platform Macintosh
Assigned to: Roger Light

Original attachment names and IDs:

On 2015-06-05 07:39:55 -0400, Fabian Ruff wrote:

Created attachment 254145
use_subject_as_username option patch

As discussed on the mailing list this is a feature request to add an option to use the full certificate subject as the username in the broker.

Attached is a patch which adds this feature as the option 'use_subject_as_username'.
I'm not sure if this is already sufficient and would like some feedback on it. At minimum it can serve as a starting point for future work.

Cheers,
Fabian
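As a sketch of how such an option would sit in mosquitto.conf (the listener settings and file paths below are illustrative assumptions, not part of the patch):

listener 8883
cafile /etc/mosquitto/ca.crt
certfile /etc/mosquitto/server.crt
keyfile /etc/mosquitto/server.key
require_certificate true
use_subject_as_username true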

On 2015-06-12 18:23:55 -0400, Roger Light wrote:

Thanks very much, I've checked the patch and it looks good, aside from some relatively minor points (check return values!). I've committed it and included some fixes/documentation.

On 2015-06-13 06:38:59 -0400, Fabian Ruff wrote:

Weeh! Thanks a lot for merging this right away.

Any rough idea when this will ship in a release?

Kind regards,
Fabian

memory leak in mqtt3_sub_add ?

migrated from Bugzilla #470253
status RESOLVED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform All
Assigned to: Roger Light

Original attachment names and IDs:

On 2015-06-16 05:43:30 -0400, Yun Wu wrote:

int mqtt3_sub_add(struct mosquitto_db *db, struct mosquitto *context, const char *sub, int qos, struct _mosquitto_subhier *root)
{
	int rc = 0;
	struct _mosquitto_subhier *subhier, *child;
	struct _sub_token *tokens = NULL;

	assert(root);
	assert(sub);

	if(_sub_topic_tokenise(sub, &tokens)) return 1;

	subhier = root->children;
	while(subhier){
		if(!strcmp(subhier->topic, tokens->topic)){
			rc = _sub_add(db, context, qos, subhier, tokens);
			break;
		}
		subhier = subhier->next;
	}
	if(!subhier){
		child = _mosquitto_malloc(sizeof(struct _mosquitto_subhier));
		if(!child){
			_sub_topic_tokens_free(tokens);
			_mosquitto_log_printf(NULL, MOSQ_LOG_ERR, "Error: Out of memory.");
			return MOSQ_ERR_NOMEM;
		}
		child->topic = _mosquitto_strdup(tokens->topic);
		if(!child->topic){
			_sub_topic_tokens_free(tokens);
			/* <---- The 1st problem: should also _mosquitto_free(child) here. */
			_mosquitto_log_printf(NULL, MOSQ_LOG_ERR, "Error: Out of memory.");
			return MOSQ_ERR_NOMEM;
		}
		child->subs = NULL;
		child->children = NULL;
		child->retained = NULL;
		if(db->subs.children){
			child->next = db->subs.children;
		}else{
			child->next = NULL;
		}
		db->subs.children = child;

		rc = _sub_add(db, context, qos, child, tokens); /* <---- The 2nd problem: Will child be leaked when entering _sub_add? */
	}

	_sub_topic_tokens_free(tokens);

	/* We aren't worried about -1 (already subscribed) return codes. */
	if(rc == -1) rc = MOSQ_ERR_SUCCESS;
	return rc;
}

The 3rd problem, in _sub_add():

      if(context->subs){
          for(i=0; i<context->sub_count; i++){
              if(!context->subs[i]){
                  context->subs[i] = subhier;
                  break;
              }
          }
          if(i == context->sub_count){
              context->sub_count++;
              subs = _mosquitto_realloc(context->subs, sizeof(struct _mosquitto_subhier *)*context->sub_count);
              if(!subs){ <----- Once this case is hit, context->sub_count is not correct any more.
                  _mosquitto_free(leaf); 
                  return MOSQ_ERR_NOMEM;
              }
              context->subs = subs;
              context->subs[context->sub_count-1] = subhier;
          }
      }else{
          context->sub_count = 1;
          context->subs = _mosquitto_malloc(sizeof(struct _mosquitto_subhier *)*context->sub_count);
          if(!context->subs){
              _mosquitto_free(leaf);
              return MOSQ_ERR_NOMEM;
          }
          context->subs[0] = subhier;
      }

The 4th problem:
Since realloc(NULL, new_size) is equivalent to malloc(new_size), the code snippet listed in the 3rd problem can be simplified as:

      for(i=0; i<context->sub_count; i++){
          if(!context->subs[i]){
              context->subs[i] = subhier;
              break;
          }
      }
      if(i == context->sub_count){
          subs = _mosquitto_realloc(context->subs, sizeof(struct _mosquitto_subhier *)*(context->sub_count + 1));
          if(!subs){
              _mosquitto_free(leaf);
              return MOSQ_ERR_NOMEM;
          }
          context->subs = subs;
          context->sub_count++;
          context->subs[context->sub_count-1] = subhier;
      }

On 2015-06-21 16:21:22 -0400, Roger Light wrote:

Thanks for the report, I think I agree with all of what you've said, I'll make the changes for a future release.

On 2015-06-26 17:19:08 -0400, Roger Light wrote:

Thanks for the patch. Could you please add a comment stating that you comply with the terms here: http://www.eclipse.org/legal/CoO.php

After you have done this I can accept the patch.

On 2015-07-01 23:44:21 -0400, Yun Wu wrote:

Created attachment 254894
Fix a memory leak case in mqtt3_sub_add(); Fix sub_count incorrect problem in _sub_add(); Simplify logic in _sub_add();

On 2015-07-01 23:45:41 -0400, Yun Wu wrote:

Problems 1, 3, and 4 are fixed in the attached patch.
Problem 2, I don't know how to fix.

On 2015-07-02 15:43:31 -0400, Roger Light wrote:

Hi Yun,

Thanks for the updated patch. As I said, if you write a comment here stating you comply with the terms here: http://www.eclipse.org/legal/CoO.php then I can accept it.

On 2015-07-09 04:44:36 -0400, Yun Wu wrote:

I certify that:

  1. I have authored 100% of the contribution.
  2. I have the necessary rights to submit this contribution, including any necessary permissions from my employer.
  3. I am providing this contribution under the license(s) associated with the Eclipse Foundation project I am contributing to.
  4. I understand and agree that Eclipse projects and my contributions are public, and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with the license(s) involved.

On 2015-07-09 04:45:13 -0400, Yun Wu wrote:

Sorry for replying late.
So, is this OK?

On 2015-08-17 17:48:45 -0400, Roger Light wrote:

Sorry for the delay in replying, I've now committed this change.

CRLs need to be easily reloadable

migrated from Bugzilla #465345
status NEW severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

Original attachment names and IDs:

On 2015-04-23 17:14:09 -0400, Roger Light wrote:

Reported on the mailing list:

http://dev.eclipse.org/mhonarc/lists/mosquitto-dev/msg00601.html

At the moment CRLs can only be reloaded by restarting the broker.

On 2015-08-30 06:40:20 -0400, Benajmin Berg wrote:

Created attachment 256235
patch implementing periodic crl reloading

This is a patch that has been working for me for a while now. It probably still has some rough edges, but might be helpful for anyone who wants to get CRL reload support merged.

broker does not send last will messages sometimes

migrated from Bugzilla #480716
status UNCONFIRMED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-10-27 02:06:35 -0400, bardya momeni wrote:

Hi Roger...

I've been using mosquitto 1.4.2 and 1.4.4 for some time.
I think there is a problem with the last will and testament being sent from the broker to the client that is subscribed on the last will topic. Let's call this client the 'server' and the other clients 'clients'.

I have a counter on my server. When a user connects, it sends a message with QoS 2 to my server using the broker (I call it the 'first will'). When my server gets this message it increments the counter. When the user disconnects (only by unexpected disconnect; the client does not send a disconnect message), its last will message must be triggered and served to my server so my server can decrement its counter.

When there are too many concurrent connections to the broker (around 3K connections and about 30 parallel connect requests to the broker), some last will messages are ignored and not sent to my server, so my counter grows large.

For example, my counter shows that there are 15K users but the 'ss -tn |grep :1883 |wc -l' command in linux shows about 3K sockets.

In my server I use Paho MQTT library. Mosquitto is configured for unlimited inflight and queued messages.

_mosquitto_socketpair(...) call generally fails (Windows)

migrated from Bugzilla #484693
status RESOLVED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-12-18 09:50:15 -0500, Steve Woods wrote:

The creation of the socket pair from _mosquitto_connect_init(..) fails 95% of the time, as the spW = accept(...) call nearly always fails with WSAEWOULDBLOCK (Resource temporarily unavailable). This means that most of the time posts are delayed by an average of 500ms. Removing the non-blocking call on the listen socket fixes this behaviour. See the git diff below (it also has some fixes for leaks if certain phases fail).

diff --git a/lib/net_mosq.c b/lib/net_mosq.c
index 0117ea1..8e9888e 100644
--- a/lib/net_mosq.c
+++ b/lib/net_mosq.c
@@ -1125,10 +1125,6 @@ int _mosquitto_socketpair(mosq_sock_t *pairR, mosq_sock_t *pairW)
 			continue;
 		}
 
-		if(_mosquitto_socket_nonblock(listensock)){
-			continue;
-		}
-
 		if(family[i] == AF_INET){
 			sa->sin_family = family[i];
 			sa->sin_addr.s_addr = htonl(INADDR_LOOPBACK);
@@ -1146,12 +1142,11 @@ int _mosquitto_socketpair(mosq_sock_t *pairR, mosq_sock_t *pairW)
 		}
 		if(_mosquitto_socket_nonblock(spR)){
 			COMPAT_CLOSE(listensock);
+			COMPAT_CLOSE(spR);
 			continue;
 		}
 		if(connect(spR, (struct sockaddr *)&ss, ss_len) < 0){
-#ifdef WIN32
 			errno = WSAGetLastError();
-#endif
 			if(errno != EINPROGRESS && errno != COMPAT_EWOULDBLOCK){
 				COMPAT_CLOSE(spR);
 				COMPAT_CLOSE(listensock);
@@ -1160,18 +1155,14 @@ int _mosquitto_socketpair(mosq_sock_t *pairR, mosq_sock_t *pairW)
 		}
 		spW = accept(listensock, NULL, 0);
 		if(spW == -1){
-#ifdef WIN32
 			errno = WSAGetLastError();
-#endif
 			if(errno != EINPROGRESS && errno != COMPAT_EWOULDBLOCK){
 				COMPAT_CLOSE(spR);
 				COMPAT_CLOSE(listensock);
 				continue;
 			}
 			COMPAT_CLOSE(spR);
 			COMPAT_CLOSE(listensock);
 			continue;
 		}
 
 		if(_mosquitto_socket_nonblock(spW)){
 			COMPAT_CLOSE(spR);
+			COMPAT_CLOSE(spW);
 			COMPAT_CLOSE(listensock);
 			continue;
 		}

On 2015-12-18 17:03:01 -0500, Roger Light wrote:

It took a long time to replicate this, I see failures very very rarely. The problem that I see is down to _mosquitto_socket_nonblock() failing because the accept() call is incomplete - although that shouldn't matter in reality. Either way, not setting listensock as non-blocking does seem to fix it.

Thanks for the report.

On 2015-12-18 17:12:12 -0500, Roger Light wrote:

*** Bug 479143 has been marked as a duplicate of this bug. ***

Add ability to send messages direct to a client id

migrated from Bugzilla #470564
status NEW severity enhancement in component Mosquitto for 1.4
Reported in version 1.4 on platform All
Assigned to: Roger Light

On 2015-06-19 04:58:03 -0400, Roger Light wrote:

An often requested feature is the ability to send a message to a client based on its clientid, even if the client hasn't made any subscriptions.

Proposal is something like sending a message to $CLIENTS/direct/

Mosquitto leaks File Descriptors for Bridge Socket connections

migrated from Bugzilla #477571
status RESOLVED severity critical in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

Original attachment names and IDs:

On 2015-09-16 09:18:47 -0400, Johnny Egeland wrote:

Created attachment 256613
Configuration file to reproduce the Socket Leak issue

When mosquitto is configured with a redundant bridge setup, Mosquitto will leak socket file descriptors whenever a connection is rejected by the broker(s). Eventually it will exhaust all 1024 available file descriptors and crash. The bug has been confirmed on BOTH un-encrypted and TLS connections.

This problem is confirmed in versions 1.4.2 and 1.4.3 (git HEAD), and we have this problem in live deployments.

The problem can be easily reproduced the following way (using a freshly compiled mosquitto binary from GIT):

  1. You need to make sure the "brokers" reject the bridge connections from Mosquitto. This can be easily simulated using iptables (using port 8883 for the example, but the port number is irrelevant):
    sudo iptables -A OUTPUT -p tcp --dport 8883 -j REJECT

2. Configure some "bogus" redundant bridge connections. The addresses do not really matter, as iptables will reject the connection. Here is the configuration I used for reproducing the issue:

Bridge Settings

connection bridgeleaktest
round_robin false
remote_clientid bridgeleaktest
remote_username bridgeleaktest
remote_password notrelevant
keepalive_interval 30
bridge_protocol_version mqttv31

Servers

addresses example.org:8883 example.com:8883

Subscriptions

topic # out 1 / bridgeleaktest/

topic # in 1 / bridgeleaktest/

  3. With the given configuration saved as "mosquitto.conf", start mosquitto as follows:
    mosquitto -v -c mosquitto.conf

The broker is now steadily leaking socket connections, and will eventually run out of file descriptors and crash. To observe the behavior, list the files in:
/proc/<mosquitto_pid>/fd

You will see this list increasing every time it tries to connect to the next broker in the list.

NOTE! If you turn ON "round_robin" (round_robin true), the bug does not happen any more. Then the socket list stays constant. This is our current workaround for this problem.

My exact test configuration file is attached.

On 2015-09-17 04:28:32 -0400, Johnny Egeland wrote:

I've looked into the code, and it seems the problem is caused by the non-round_robin handling in the "mosquitto_main_loop" in loop.c (line 177).

Not sure why yet, but this would explain another conundrum which happens in our production environment:
If we deliberately take down the primary broker, mosquitto will start leaking sockets. This is exactly what happened when we discovered the bug in the first place; the main broker server was taken down (for some time) for maintenance, and we had bridges breaking down all over after a couple of days.

This seems to fit very well to the behaviour in non-round_robin mode, and I suspect it's the connection test algorithm which fails to clean up the socket correctly after closing down the "test" socket.

On 2015-09-17 05:38:43 -0400, Roger Light wrote:

Thanks for the very useful report, I've fixed this in the "fixes" branch. I thought I'd posted this last night already, sorry.

On 2015-09-17 05:58:11 -0400, Johnny Egeland wrote:

No worries. Thanks for a quick fix :-)

But I was wondering if the fix really is correct? It does fix the leak, but I suspect it may cause the connection test to fail even if the connection is really OK.

I looked into what's happening, and the "connect" call returns: EINPROGRESS (not EAGAIN/EWOULDBLOCK)

As far as I know, this really means that the connection was not able to be established in a non-blocking manner, and you need to use select/poll to get the final verdict later (which will then cause a failure if the connection failed).

So my fear is that this fix may cause the connection test to give up, even if it's really possible to connect to the tested server (it just takes some time).

It's a lot trickier to reproduce such a situation, but I'll try and write up a new bug report if this actually is an issue.

On 2015-09-17 08:55:59 -0400, Roger Light wrote:

Agreed, this does still have the possibility of failing to connect. The change fixes the socket leak and still has a good chance of having the connection succeed.

The behaviour isn't any different now from what it was in 1.4.3 at least.

It will be fixed properly as part of another update.

On 2015-09-17 09:32:13 -0400, Johnny Egeland wrote:

Ok :-) As our backup broker servers will actually reject connections as long as the primary server is up, this does not really seem to cause any issues. So in the worst case it fails over to "round_robin" behaviour and eventually connects back to the primary server anyway. So this is not a big issue for my part.

However, it would be nice if this worked correctly, even if it's not a very critical problem any more for my part.

Thank you very much for your very quick patch :-)

Socket leak when repeatedly calling connect

migrated from Bugzilla #484692
status CLOSED severity major in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-12-18 09:40:52 -0500, Steve Woods wrote:

Repeated calls to connect results in two sockets being leaked in the _mosquitto_socketpair(..) call.

Fix below.

diff --git a/lib/mosquitto.c b/lib/mosquitto.c
index 50ba9eb..e62e67e 100644
--- a/lib/mosquitto.c
+++ b/lib/mosquitto.c
@@ -406,7 +406,16 @@ static int _mosquitto_connect_init(struct mosquitto *mosq, const char *host, int
 	}
 
 	mosq->keepalive = keepalive;
 
+	if(mosq->sockpairR != INVALID_SOCKET){
+		COMPAT_CLOSE(mosq->sockpairR);
+		mosq->sockpairR = INVALID_SOCKET;
+	}
+	if(mosq->sockpairW != INVALID_SOCKET){
+		COMPAT_CLOSE(mosq->sockpairW);
+		mosq->sockpairW = INVALID_SOCKET;
+	}
+
 	if(_mosquitto_socketpair(&mosq->sockpairR, &mosq->sockpairW)){
 		_mosquitto_log_printf(mosq, MOSQ_LOG_WARNING,
 			"Warning: Unable to open socket pair, outgoing publish commands may be delayed.");

On 2015-12-18 17:08:05 -0500, Roger Light wrote:

Thanks for the report, but this was already fixed in this changeset:

ralight@c02cdbe

Enhancement: Get rid of unnecessary malloc/memcpy when processing publish message

migrated from Bugzilla #470258
status RESOLVED severity normal in component Mosquitto for 1.5
Reported in version unspecified on platform PC
Assigned to: Roger Light

On 2015-06-16 05:58:08 -0400, Yun Wu wrote:

mqtt3_handle_publish calls mqtt3_db_message_store.

In mqtt3_db_message_store, source_id, topic and payload are all newly-allocated and copied.

Since the topic allocated in mqtt3_handle_publish is not used after mqtt3_db_message_store has returned, we can get rid of one unnecessary payload copy for better performance.

Here is my fix:

  1. Introduce a new function:

#define F_NEW_SOURCE_ID (1 << 0)
#define F_NEW_TOPIC (1 << 1)
#define F_NEW_PAYLOAD (1 << 2)

int __mqtt3_db_message_store(int flags, struct mosquitto_db *db, const char *source_id, uint16_t source_mid, const char *topic, int qos, uint32_t payloadlen, const void *payload, int retain, struct mosquitto_msg_store **stored, dbid_t store_id)
{
	struct mosquitto_msg_store *temp = NULL;
	char *new_source_id = NULL;
	char *new_topic = NULL;
	void *new_payload = NULL;

	assert(db);
	assert(stored);

	if (flags & F_NEW_SOURCE_ID) {
		new_source_id = _mosquitto_strdup(source_id ? source_id : "");
		if(!new_source_id){
			goto no_memory;
		}
		source_id = new_source_id;
	}
	if (flags & F_NEW_TOPIC) {
		if (topic) {
			new_topic = _mosquitto_strdup(topic);
			if(!new_topic){
				goto no_memory;
			}
		}
		topic = new_topic;
	}
	if (flags & F_NEW_PAYLOAD) {
		if (payloadlen && payload) {
			new_payload = _mosquitto_malloc(sizeof(char)*payloadlen);
			if(!new_payload){
				goto no_memory;
			}
			memcpy(new_payload, payload, sizeof(char)*payloadlen);
		}
		payload = new_payload;
	}

	temp = _mosquitto_calloc(1, sizeof(struct mosquitto_msg_store));
	if(!temp) goto no_memory;

	temp->ref_count = 0;
	temp->source_id = (char *)source_id;
	temp->source_mid = source_mid;
	temp->msg.mid = 0;
	temp->msg.qos = qos;
	temp->msg.retain = retain;
	temp->msg.topic = (char *)topic;
	temp->msg.payloadlen = payloadlen;
	temp->msg.payload = (void *)payload;
	temp->dest_ids = NULL;
	temp->dest_id_count = 0;
	if(!store_id){
		temp->db_id = ++db->last_db_id;
	}else{
		temp->db_id = store_id;
	}

	temp->db = db;
	LIST_INSERT_HEAD(&db->msg_store, temp, list);
	db->msg_store_count++;

	(*stored) = temp;
	return MOSQ_ERR_SUCCESS;

no_memory:
	_mosquitto_log_printf(NULL, MOSQ_LOG_ERR, "Error: Out of memory.");
	if (new_source_id) {
		_mosquitto_free(new_source_id);
	}
	if (new_topic) {
		_mosquitto_free(new_topic);
	}
	if (new_payload) {
		_mosquitto_free(new_payload);
	}
	if (temp) {
		_mosquitto_free(temp);
	}
	return MOSQ_ERR_NOMEM;
}

  2. Change the implementation of mqtt3_db_message_store:

int mqtt3_db_message_store(struct mosquitto_db *db, const char *source, uint16_t source_mid, const char *topic, int qos, uint32_t payloadlen, const void *payload, int retain, struct mosquitto_msg_store **stored, dbid_t store_id)
{
	return __mqtt3_db_message_store(F_NEW_SOURCE_ID | F_NEW_TOPIC | F_NEW_PAYLOAD, db, source, source_mid, topic, qos, payloadlen, payload, retain, stored, store_id);
}

  3. In mqtt3_handle_publish, call __mqtt3_db_message_store:

if(!stored){
	dup = 0;
	if(__mqtt3_db_message_store(F_NEW_SOURCE_ID | F_NEW_TOPIC, db, context->id, mid, topic, qos, payloadlen, payload, retain, &stored, 0)){
		_mosquitto_free(topic);
		if(payload) _mosquitto_free(payload);
		return 1;
	}
	payload = NULL; /* <-- Add this line */
}else{
	dup = 1;
}

On 2015-06-28 17:28:05 -0400, Roger Light wrote:

Thanks for the report, I very much agree with the idea but I've implemented it a bit differently. It's in the develop branch.

Socket error on client, disconnecting. 1.4.8 on OSX.

migrated from Bugzilla #488633
status ASSIGNED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2016-02-27 19:02:59 -0500, James Lewis wrote:

Here's the test conditions:

Localhost only.

Broker Process:
Run Mosquitto 1.4.8 in verbose mode
$ mosquitto -v

Subscriber Process:
Subscribe to localhost with a test topic "debug".
$ mosquitto_sub -h 127.0.0.1 -i testSub -t debug

Publisher Process:
Publish a message to localhost with the same test topic, "debug".
$ mosquitto_pub -h 127.0.0.1 -i testPublish -t debug -m 'Hello World'

The Broker process will report:
1456617536: New connection from 127.0.0.1 on port 1883.
1456617536: New client connected from 127.0.0.1 as testPublish (c1, k60).
1456617536: Socket error on client testPublish, disconnecting.

No errors are reported from the publish process:
1456617552: New connection from 127.0.0.1 on port 1883.
1456617552: New client connected from 127.0.0.1 as testSub (c1, k60).

On my Mac I will still see the message on the Publisher process. However, I have heard from others that they have intermittent success. Even if I use 1.4.7 or 1.4.5 versions of mosquitto_pub and mosquitto_sub the Broker process will always report a "Socket error on client."

If I change ONLY the broker to 1.4.5, the error never occurs. On the machine that has intermittent success, running 1.4.5 as the broker (simply replacing only the mosquitto binary) results in expected functioning.

Verified firewall set to "allow incoming connections."

On 2016-03-07 17:28:25 -0500, Roger Light wrote:

Thanks for this, I've found the problem which comes from a Windows related change. I've got a fix for OSX, but need to check there are no regressions on Windows.

Incorrect subscription tree displaying

migrated from Bugzilla #470246
status RESOLVED severity normal in component Mosquitto for 1.5
Reported in version 1.4 on platform All
Assigned to: Roger Light

On 2015-06-16 05:13:37 -0400, Yun Wu wrote:

subscription tree displayed:

[yun@OptiPlex mosquitto-1.4.2]$ ./src/mosquitto
1434444278: mosquitto version 1.4.2 (build date 2015-06-16 16:44:26+0800) starting
1434444278: Using default config.
1434444278: Opening ipv4 listen socket on port 1883.
1434444278: Opening ipv6 listen socket on port 1883.

$SYS
$SYS <-------------------------------- This is incorrect
broker
version (r)
timestamp (r)
uptime (r)
clients
total (r)
inactive (r)
disconnected (r)
active (r)
connected (r)
expired (r)
messages
stored (r)
received (r)
sent (r)
subscriptions
count (r)
retained messages
count (r)
publish
messages
dropped (r)
received (r)
sent (r)
bytes
received (r)
sent (r)
bytes
received (r)
sent (r)

Topics received by mosquitto_sub:

[yun@OptiPlex mosquitto-1.4.2]$ client/mosquitto_sub -t '$SYS/#' -v
$SYS/broker/version mosquitto version 1.4.2
$SYS/broker/timestamp 2015-06-16 16:44:26+0800
$SYS/broker/uptime 88 seconds
$SYS/broker/clients/total 0
$SYS/broker/clients/inactive 0
$SYS/broker/clients/disconnected 0
$SYS/broker/clients/active 0
$SYS/broker/clients/connected 0
$SYS/broker/clients/expired 0
$SYS/broker/messages/stored 42
$SYS/broker/messages/received 2
$SYS/broker/messages/sent 78
$SYS/broker/subscriptions/count 0
$SYS/broker/retained messages/count 42
$SYS/broker/publish/messages/dropped 0
$SYS/broker/publish/messages/received 0
$SYS/broker/publish/messages/sent 76
$SYS/broker/publish/bytes/received 0
$SYS/broker/publish/bytes/sent 296
$SYS/broker/bytes/received 50
$SYS/broker/bytes/sent 2959

On 2015-06-16 05:15:35 -0400, Yun Wu wrote:

This bug does not exist in mosquitto-1.2.
In mosquitto-1.2:

$SYS
broker
version (r)
timestamp (r)
uptime (r)
clients
total (r)
inactive (r)
active (r)
maximum (r)
expired (r)
messages
stored (r)
received (r)
sent (r)
subscriptions
count (r)
retained messages
count (r)
publish
messages
dropped (r)
received (r)
sent (r)
bytes
received (r)
sent (r)
bytes
received (r)
sent (r)
load
messages
received
1min (r)
5min (r)
15min (r)
sent
1min (r)
5min (r)
15min (r)
bytes
received
1min (r)
5min (r)
15min (r)
sent
1min (r)
5min (r)
15min (r)
sockets
1min (r)
5min (r)
15min (r)
connections
1min (r)
5min (r)
15min (r)
publish
received
1min (r)
5min (r)
15min (r)
sent
1min (r)
5min (r)
15min (r)

On 2015-06-21 16:23:01 -0400, Roger Light wrote:

The documentation around this feature says that it is for testing only and could disappear at any point. I don't consider this to be a bug; it's a change in the internal behaviour of the broker.

Is there an important use case you need this information for, that you can't get through other means?

On 2015-06-22 22:28:08 -0400, Yun Wu wrote:

I am afraid I cannot accept this explanation :-)
A bug is behaviour that does not match expectations, no matter whether it is an official feature or not :-)

On 2015-06-29 16:26:10 -0400, Roger Light wrote:

This has now been fixed here: http://git.eclipse.org/c/mosquitto/org.eclipse.mosquitto.git/commit/?h=develop&id=a4dad02

No easy way to "discover" a mosquitto server

migrated from Bugzilla #452922
status CLOSED severity enhancement in component Mosquitto for ---
Reported in version unspecified on platform PC
Assigned to: Roger Light

On 2014-11-23 16:52:42 -0500, Roger Light wrote:

(imported from launchpad)

This is definitely "wishlist" rather than "bug".

I've always wondered why messaging resources like a mosquitto broker don't advertise themselves. In my opinion it would make a lot of sense to have them advertise their connection endpoints / ports via DNS SRV records or Bonjour/zeroconf.
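As a sketch of the DNS SRV idea (the zone entry below is purely hypothetical; _mqtt._tcp is the conventional service label, and the host and port are examples only), a broker could be advertised like this:

_mqtt._tcp.example.com. 86400 IN SRV 0 0 1883 broker.example.com.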

On 2016-03-09 16:07:21 -0500, Roger Light wrote:

Won't fix.

Option to disable persistent clients but still retain messages.

migrated from Bugzilla #475517
status UNCONFIRMED severity enhancement in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-08-20 13:14:51 -0400, Joe McIlvain wrote:

As far as I can tell, there is no configuration option in mosquitto.conf to disable holding persistent client session information (and queued messages) but keep retained messages only.

It would be useful to have such an option.

On 2015-08-24 17:01:14 -0400, Roger Light wrote:

Sounds like a reasonable request to me. I see no reason that couldn't be in 1.5.

A will published by a client on port with mount_point is later published without mount_point prefix

migrated from Bugzilla #487178
status UNCONFIRMED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2016-02-03 18:43:08 -0500, Lance Riley wrote:

If a client connected to a broker port configured to prefix topics using configuration option "mount_point" registers a will topic, and subsequently fails, then the will topic is released by the broker without the mount_point prefix applied. This breaks the client isolation provided by the mount_point option.

To show the problem, edit mosquitto.conf to add "mount_point /ext" for a given non standard port (1884).

Prove normal operation by subscribing to the standard port that has no mount_point prefix defined and then publish via the port with the mount_point prefix:

echo "xxx" | mosquitto_pub -p 1884 -t /TEST -l --will-topic /TEST --will-payload will

Verbose subscribe client on standard port receives: "/ext/TEST xxx"

Now publish with no message from stdin and abort with ^C to trigger will publication:

mosquitto_pub -p 1884 -t /TEST -l --will-topic /TEST --will-payload will
^C

Verbose subscribe client on standard port receives: "/TEST will" which has not got the expected prefix.

Additionally a client subscribed to all topics on the non standard broker port with the mount_point prefix defined will not receive the will message as it is generated without the prefix and so filtered out for this port.
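For reference, the broker setup described above amounts to something like the following mosquitto.conf fragment (a sketch; the port and prefix are the ones from the report):

port 1883
listener 1884
mount_point /ext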

On 2016-02-11 08:41:38 -0500, Lance Riley wrote:

Just to clarify the point about client isolation, this issue allows a client to inject arbitrary topics and messages into the broker that are not constrained by the mount_point restrictions imposed on it by the port it is connected through.

In a secure installation where the root topic port is protected by TLS, etc., but which also has an additional insecure port whose scope is restricted by a mount_point prefix, a client connected to that insecure port would be able to inject topics without the prefix by first registering a will and then closing the connection to simulate a client failure, without going through the normal closure sequence.

On 2016-02-11 16:29:37 -0500, Roger Light wrote:

Thanks very much for the report and the useful instructions.

This is now fixed in the fixes branch and will be part of 1.4.8 shortly.

On 2016-02-17 04:54:37 -0500, Lance Riley wrote:

Hi Roger.

Thanks for the fix. I've noticed however that under the original test conditions (mount_point /ext) with version 1.4.8 on Raspbian, I am now seeing the correct will mount point prefix, but the final character of the will topic is cropped.

mosquitto_pub -p 1884 -t /NORMAL -l --will-topic /WILL --will-payload will
^C

Publishes:

/ext/WIL will

I've changed the bug status back to unconfirmed - I hope that is the correct thing to do.

Thanks,
Lance.

On 2016-02-29 12:17:37 -0500, Lance Riley wrote:

Hi Roger.

The following change fixes the problem ...

Edit: mosquitto-1.4.8/src/read_handle_server.c

slen = strlen(context->listener->mount_point) + strlen(will_topic);
will_topic_mount = _mosquitto_malloc(slen+1);
if(!will_topic_mount){
rc = MOSQ_ERR_NOMEM;
goto handle_connect_error;
}
snprintf(will_topic_mount, slen, "%s%s", context->listener->mount_point, will_topic);

In the snprintf() call, change arg 2 to "slen+1" to match the malloc. Arg 2 is the maximum count including the '\0' terminator.
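That is, the corrected call would read:

snprintf(will_topic_mount, slen+1, "%s%s", context->listener->mount_point, will_topic);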

From the man page:

"The functions snprintf() and vsnprintf() write at most size bytes (including the terminating null byte ('\0')) to str."

Thanks,
Lance.

bus fault in mqtt3_subs_clean_session with musl libc (openwrt)

migrated from Bugzilla #475707
status CLOSED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

Original attachment names and IDs:

On 2015-08-24 09:34:57 -0400, Karl Palsson wrote:

Presumably tied to openwrt's switch to musl libc, but I haven't rebuilt it all with uclibc again. Reported with 1.4.2 and confirmed with 1.4.3 as well.

Starting the broker as just "mosquitto -v" (no config file) and then publishing a single message from a remote client (mosquitto_sub) is sufficient to crash the broker with a bus fault.

Program received signal SIGBUS, Bus error.
0x77f78b30 in malloc_usable_size ()
from /home/karlp/src/openwrt-trunk-upstream/scripts/../staging_dir/target-mips_34kc_musl-1.1.10/root-ar71xx/lib/ld-musl-mips-sf.so.1
(gdb) bt
Python Exception <type 'exceptions.ImportError'> No module named gdb.frames:

#0  0x77f78b30 in malloc_usable_size ()
   from /home/karlp/src/openwrt-trunk-upstream/scripts/../staging_dir/target-mips_34kc_musl-1.1.10/root-ar71xx/lib/ld-musl-mips-sf.so.1
#1  0x00408ca2 in _mosquitto_free (mem=0x0) at ../lib/memory_mosq.c:57
#2  0x0041007c in mqtt3_subs_clean_session (db=db@entry=0x428280 <int_db>, context=context@entry=0x77d81290) at subs.c:631
#3  0x00407208 in mqtt3_context_cleanup (db=db@entry=0x428280 <int_db>, context=0x77d81290, do_free=do_free@entry=true) at context.c:138
#4  0x00407454 in mosquitto__free_disused_contexts (db=db@entry=0x428280 <int_db>) at context.c:224
#5  0x004085ae in mosquitto_main_loop (db=db@entry=0x428280 <int_db>, listensock=listensock@entry=0x77ff2ad0, listensock_count=listensock_count@entry=2, listener_max=listener_max@entry=4) at loop.c:130
#6  0x00402f1e in main (argc=, argv=) at mosquitto.c:366
(gdb) up
#1  0x00408ca2 in _mosquitto_free (mem=0x0) at ../lib/memory_mosq.c:57
57		memcount -= malloc_usable_size(mem);
(gdb) up
#2  0x0041007c in mqtt3_subs_clean_session (db=db@entry=0x428280 <int_db>, context=context@entry=0x77d81290) at subs.c:631
631		_mosquitto_free(context->subs);
(gdb) info locals
i = 0
leaf =
hier =
(gdb) info args
db = 0x428280 <int_db>
context = 0x77d81290
(gdb) p *context
$1 = {sock = -1, protocol = mosq_p_mqtt31, address = 0x77d811d0 "192.168.255.124", id = 0x0, username = 0x0,
password = 0x0, keepalive = 60, last_mid = 0, state = mosq_cs_disconnected, last_msg_in = 2660, last_msg_out = 2660,
ping_t = 0, in_packet = {payload = 0x0, next = 0x0, remaining_mult = 1, remaining_length = 0, packet_length = 0,
to_process = 0, pos = 0, mid = 0, command = 0 '\000', remaining_count = 0 '\000'}, current_out_packet = 0x0,
out_packet = 0x0, will = 0x0, ssl = 0x0, ssl_ctx = 0x0, tls_cafile = 0x0, tls_capath = 0x0, tls_certfile = 0x0,
tls_keyfile = 0x0, tls_pw_callback = 0x0, tls_version = 0x0, tls_ciphers = 0x0, tls_psk = 0x0, tls_psk_identity = 0x0,
tls_cert_reqs = 0, tls_insecure = false, want_write = false, want_connect = false, clean_session = true,
is_dropping = false, is_bridge = false, bridge = 0x0, msgs = 0x0, last_msg = 0x0, msg_count = 0, msg_count12 = 0,
acl_list = 0x0, listener = 0x0, disconnect_t = 1440421836, out_packet_last = 0x0, subs = 0x0, sub_count = 0,
pollfd_index = 2, ws_context = 0x0, wsi = 0x0, hh_id = {tbl = 0x77d81560, prev = 0x0, next = 0x0, hh_prev = 0x0,
hh_next = 0x0, key = 0x77d81540, keylen = 18, hashv = 974165546}, hh_sock = {tbl = 0x77d811f0, prev = 0x0,
next = 0x0, hh_prev = 0x0, hh_next = 0x0, key = 0x77d81290, keylen = 4, hashv = 1015908205}, for_free_next = 0x0}
(gdb)

Running again, context->subs and context->sub_count are both 0 at entry to this function.

Grossly adding a check on ->subs before the _mosquitto_free call works, but I'm not sure if that's the real fix or not.

On 2015-08-24 15:17:34 -0400, Roger Light wrote:

Created attachment 256079
General fix

On 2015-08-24 16:08:51 -0400, Roger Light wrote:

This is also fixed in the develop branch:

http://git.eclipse.org/c/mosquitto/org.eclipse.mosquitto.git/commit/?h=develop&id=SHA: 3f86d31

On 2015-09-16 16:22:10 -0400, Roger Light wrote:

Fixed in the fixes branch as well.

protocol errors for QoS1 and QoS2 in a congested environment

migrated from Bugzilla #486891
status UNCONFIRMED severity major in component Mosquitto for 1.4
Reported in version 1.4 on platform All
Assigned to: Roger Light

On 2016-01-30 11:05:07 -0500, Christoph Krey wrote:

Broker 1.4.4
Client libmosquitto 1.4.7

Congestion situation arises because initial SUBSCRIBE triggers the transmission
of about 1000 retained messages, each about 1500 bytes long.

Broker publishes QoS1 and QoS2 messages...
Client repeats PUBRECs although PUBREL was sent by the broker and PUBCOMP received, leading to connection close.

The re-established connection is closed repeatedly because of an unexpected PUBREL received by the broker.

This situation repeats until the broker decides to answer the PUBRECs with PUBRELs instead of closing the connection.

After reconnection, the broker re-sends (dup=1) QoS1 messages, but sends some of them 2 or 3 times.

After reconnection, the broker sends (dup=0) multiple QoS1 messages with the same messageId without receiving a PUBACK in between.

Log will be sent directly to roger

On 2016-02-11 16:55:19 -0500, Roger Light wrote:

*** Bug 486892 has been marked as a duplicate of this bug. ***

CONNACK Message not sent for Unauthorized Connect Request for Websocket

migrated from Bugzilla #484761
status UNCONFIRMED severity critical in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-12-21 08:43:30 -0500, koray sariteke wrote:

For Authentication, we implemented mosquitto_plugin.h functions.

When we start the mqtt broker with the Websocket protocol, a CONNACK with result code 5 IS NOT sent back to the client. The TCP client is disconnected by the Mosquitto server without any information.

According to the MQTT specification:
http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718035

If a well formed CONNECT Packet is received by the Server, but the Server is unable to process it for some reason, then the Server SHOULD attempt to send a CONNACK packet containing the appropriate non-zero Connect return code from this table. If a server sends a CONNACK packet containing a non-zero return code it MUST then close the Network Connection [MQTT-3.2.2-5].

On 2015-12-21 09:02:10 -0500, koray sariteke wrote:

vi read_handle_server.c +364

#ifdef REAL_WITH_TLS_PSK

            }

#endif /* REAL_WITH_TLS_PSK */

    }else{

#endif /* WITH_TLS */

            if(username_flag){
                    rc = mosquitto_unpwd_check(db, username, password);
                    switch(rc){
                            case MOSQ_ERR_SUCCESS:
                                    break;
                            case MOSQ_ERR_AUTH:
                                    _mosquitto_send_connack(context, 0, CONNACK_REFUSED_NOT_AUTHORIZED);
                                    mqtt3_context_disconnect(db, context);
                                    rc = 0;
                                    goto handle_connect_error;
                                    break;

If the mqtt3_context_disconnect(db, context); line is removed and rc is set to 0, the CONNACK is sent to the client with reason code 5, but the TCP connection stays open, which is not the correct state.

cmake does not find openssl on Mac OS X El Capitan

migrated from Bugzilla #478888
status UNCONFIRMED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-10-02 06:21:15 -0400, Christoph Krey wrote:

$ cmake ..
CMake Error at /usr/local/Cellar/cmake/3.3.2/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:148 (message):
Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the
system variable OPENSSL_ROOT_DIR (missing: OPENSSL_INCLUDE_DIR)
Call Stack (most recent call first):
/usr/local/Cellar/cmake/3.3.2/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:388 (_FPHSA_FAILURE_MESSAGE)
/usr/local/Cellar/cmake/3.3.2/share/cmake/Modules/FindOpenSSL.cmake:334 (find_package_handle_standard_args)
CMakeLists.txt:61 (find_package)

-- Configuring incomplete, errors occurred!
See also "/Users/ckrey/org.eclipse.mosquitto/build/CMakeFiles/CMakeOutput.log".

Might be related to Homebrew/legacy-homebrew#44375

On 2015-10-02 08:50:01 -0400, Roger Light wrote:

Do the headers exist? From what I've been reading, it seems like the openssl headers have been removed on El Capitan.

On 2015-10-02 12:01:25 -0400, Christoph Krey wrote:

Yes, as mentioned in the home-brew issue above, the headers are missing.

It helps to compile and link against home-brew's openssl

cmake -DOPENSSL_INCLUDE_DIR=/usr/local/Cellar/openssl/1.0.2d_1/include -DOPENSSL_CRYPTO_LIBRARY=/usr/local/Cellar/openssl/1.0.2d_1/lib/libcrypto.dylib -DOPENSSL_SSL_LIBRARY=/usr/local/Cellar/openssl/1.0.2d_1/lib/libssl.dylib ..

I'm not a cmake expert, so I don't know how to set up the dependency against the Homebrew version of openssl rather than the missing Apple one.
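A more compact way to point cmake at the Homebrew openssl (a sketch; OPENSSL_ROOT_DIR is the hint variable mentioned in the error message above, and the path is the same Homebrew prefix as in the previous command):

cmake -DOPENSSL_ROOT_DIR=/usr/local/Cellar/openssl/1.0.2d_1 ..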

Add Support for WITH_STRIP=y/n

migrated from Bugzilla #482782
status UNCONFIRMED severity enhancement in component Mosquitto for 1.4
Reported in version unspecified on platform All
Assigned to: Roger Light

On 2015-11-22 06:53:08 -0500, Michael Bjerking wrote:

While compiling on a "QNAP TS-419", I couldn't run "make install" because the "strip" command isn't supported (and setting e.g. "make install STRIP=/bin/true" didn't work either).

I suggest that the package support an option "WITH_STRIP" to enable/disable the install command using "-s".

The installation on my NAS worked fine when I removed "-s --strip-program=${CROSS_COMPILE}${STRIP}" from the lines below:

source "org.eclipse.mosquitto-1.4.5":
$(INSTALL) -s --strip-program=${CROSS_COMPILE}${STRIP} mosquitto_pub ${DESTDIR}${prefix}/bin/mosquitto_pub
$(INSTALL) -s --strip-program=${CROSS_COMPILE}${STRIP} mosquitto_sub ${DESTDIR}${prefix}/bin/mosquitto_sub
$(INSTALL) -s --strip-program=${CROSS_COMPILE}${STRIP} libmosquittopp.so.${SOVERSION} ${DESTDIR}${prefix}/lib${LIB_SUFFIX}/libmosquittopp.so.${SOVERSION}
$(INSTALL) -s --strip-program=${CROSS_COMPILE}${STRIP} libmosquitto.so.${SOVERSION} ${DESTDIR}${prefix}/lib${LIB_SUFFIX}/libmosquitto.so.${SOVERSION}
$(INSTALL) -s --strip-program=${CROSS_COMPILE}${STRIP} mosquitto ${DESTDIR}${prefix}/sbin/mosquitto
$(INSTALL) -s --strip-program=${CROSS_COMPILE}${STRIP} mosquitto_passwd ${DESTDIR}${prefix}/bin/mosquitto_passwd
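A sketch of what the suggested WITH_STRIP option could look like (illustrative only, not the project's actual Makefile change): the strip flags move into a variable that is only set when WITH_STRIP=yes, and each install line uses that variable instead of the hard-coded "-s --strip-program" options.

ifeq ($(WITH_STRIP),yes)
  STRIP_OPTS:=-s --strip-program=${CROSS_COMPILE}${STRIP}
endif

$(INSTALL) ${STRIP_OPTS} mosquitto_pub ${DESTDIR}${prefix}/bin/mosquitto_pub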

websocket client cannot receive queued messages

migrated from Bugzilla #476314
status RESOLVED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform All
Assigned to: Roger Light

Original attachment names and IDs:

On 2015-09-01 09:28:00 -0400, Joking Young wrote:

Created attachment 256288
broker debug information

When using Paho.MQTT to connect to the broker with the cleanSession flag set to false, subscribing to a topic with qos=1, and then disconnecting, queued messages are not received when the client reconnects to the broker. However, when I then use an MQTT client with the same client id to connect, the queued messages are received.

The attachment is the broker's debug information; the connection from localhost is created by mosquitto_pub or mosquitto_sub, and the others are created by websockets (Paho.MQTT.Client).

On 2016-02-20 04:34:09 -0500, Christoph Krey wrote:

I just found and analyzed the same problem. A Websockets subscriber does not receive queued messages immediately after a cleansession=NO disconnect and reconnect, but only when a new message is published while the Websockets subscriber is connected.

Steps to recreate:
0 connect with cleansession=NO, client gets sessionpresent=NO, subscribe QoS=1, abort
1 connect with cleansession=NO, client gets sessionpresent=YES, subscribe QoS=1, abort
2 connect with other client, publish message 1 QoS=1
3 connect with cleansession=NO, client gets sessionpresent=YES, subscribe QoS=1

MQTT now receives queued message, WS client doesn't

4 connect with cleansession=NO, client gets sessionpresent=YES, subscribe QoS=1
5 connect with other client, publish message 2 QoS=1

WS client now receives message 1 and message 2

(see commented mosquitto.log attached)

Broker: mosquitto 1.4.8
MQTT-Client: mosquitto_sub/pub 1.4.8
WS-Client: MQTT-Client iOS

On 2016-02-20 04:35:04 -0500, Christoph Krey wrote:

Created attachment 259845
Websocket client not receiving queued messages

On 2016-03-05 17:38:57 -0500, Roger Light wrote:

There is a change that fixes the problem for me on the fixes branch, could you please take a look at it?

On 2016-03-06 09:20:15 -0500, Christoph Krey wrote:

Unfortunately not, still the same behavior (see attached log)

On 2016-03-06 09:23:04 -0500, Christoph Krey wrote:

Created attachment 260112
mosquitto log

org.eclipse.mosquitto bf959ef build

On 2016-03-06 09:40:57 -0500, Roger Light wrote:

bf959ef is the head of master, not of fixes. Could you confirm you tried with the fixes branch?

On 2016-03-06 12:40:23 -0500, Christoph Krey wrote:

OOPS

trying to build from 'org.eclipse.mosquitto-fixes.tar.gz' now:

getting compile error:

cc -Wall -ggdb -O2 -I. -I.. -I../lib -DVERSION=""1.4.8"" -DTIMESTAMP=""2016-03-06 17:39:00+0000"" -DWITH_BROKER -DWITH_TLS -DWITH_TLS_PSK -DWITH_UUID -DWITH_BRIDGE -DWITH_PERSISTENCE -DWITH_MEMORY_TRACKING -DWITH_SYS_TREE -DWITH_WEBSOCKETS -DWITH_EC -c websockets.c -o websockets.o
websockets.c: In function 'callback_mqtt':
websockets.c:270:9: error: 'struct mosquitto' has no member named 'last_msg_out'
mosq->last_msg_out = mosquitto_time();
^
Makefile:93: recipe for target 'websockets.o' failed
make[1]: *** [websockets.o] Error 1

On 2016-03-06 12:45:03 -0500, Christoph Krey wrote:

same here with org.eclipse.mosquitto d9142c3

On 2016-03-06 14:51:01 -0500, Roger Light wrote:

OOPS :)

I've fixed that now.

On 2016-03-07 05:06:14 -0500, Christoph Krey wrote:

Thanks Roger,
I can confirm the problem is fixed!

On 2016-03-07 06:14:48 -0500, Roger Light wrote:

Great, thanks for confirming.

On 2016-03-10 01:53:04 -0500, Joking Young wrote:

That sounds wonderful, thanks very much!

mqtt3_db_backup is not called on every message when autosave_interval is set to 1

migrated from Bugzilla #465438
status RESOLVED severity normal in component Mosquitto for 1.5
Reported in version 1.3.x on platform PC
Assigned to: Roger Light

On 2015-04-24 10:51:31 -0400, Jaime Yu wrote:

Noticed a possible bug when the configuration file is set to autosave on every 1 message.

The line "db->persistence_changes > db->config->autosave_interval" uses a 'greater than' when I think a 'greater or equal than' operation is preferred. When db->persistence_changes is set to 1 and autosave_interval is set to 1, this will not trigger the mqtt3_db_backup. Only after I send 2 (qos2)messages to the broker, will db->persistence_changes be set to 2 and then trigger the database backup.

I did notice that the man page (http://mosquitto.org/man/mosquitto-conf-5.html) says 'if the total exceeds autosave_interval', but when I set autosave_interval to 0 with 'autosave_on_changes true' in the configuration file, mqtt3_db_backup is never called in 3.1.5 until shutdown. So maybe this is the root cause if I stick with the wording of the man page.

Let me know if you are unable to reproduce.

Here is a diff of the potential fix. Only tested on FreeBSD 10 but the problem is not OS specific.

Index: mosquitto/src/loop.c

--- mosquitto/src/loop.c	(revision 1)
+++ mosquitto/src/loop.c	(working copy)
@@ -269,7 +269,7 @@
 #ifdef WITH_PERSISTENCE
 	if(db->config->persistence && db->config->autosave_interval){
 		if(db->config->autosave_on_changes){
-			if(db->persistence_changes > db->config->autosave_interval){
+			if(db->persistence_changes >= db->config->autosave_interval){
 				mqtt3_db_backup(db, false, false);
 				db->persistence_changes = 0;

Here is my conf file:
autosave_on_changes true
autosave_interval 1
persistence_file mosquitto.db
persistence_location /tmp/
persistence true
store_clean_interval 0
queue_qos0_messages true
max_queued_messages 100
sys_interval 0

log_dest file /tmp/mosq-logs

log_type error
log_timestamp true

On 2015-04-24 16:09:38 -0400, Roger Light wrote:

I think this is almost a philosophical point :)

The documentation is correct, but you could certainly argue that it shouldn't be. Using an autosave_interval value of 0 means that the persistence file is only written when the broker exits. As it stands it isn't possible to save per every message.

I would point out that as it stands the persistence file is written out in its entirety every time, so writing it once per message really isn't a good idea.

On 2015-04-25 14:59:24 -0400, Jaime Yu wrote:

(In reply to Roger Light from comment # 1)

I think this is almost a philosophical point :)

The documentation is correct, but you could certainly argue that it
shouldn't be. Using an autosave_interval value of 0 means that the
persistence file is only written when the broker exits. As it stands it
isn't possible to save per every message.

I would point out that as it stands the persistence file is written out in
its entirety every time, so writing it once per message really isn't a good
idea.

It is one of our requirements to have the database written out on every message to avoid/minimize any data loss due to a crash. We already understood the penalty (it writes out the entire database each time) and are willing to take the hit (hence the patch).

I agree, the documentation is correct. At the very least, it could be updated to reflect that values of 1 and 0 do not result in the database being written out on every message, to avoid someone else making a similar assumption.

Should it be able to write out on every message? Our requirements say yes! But I'm not sure if anyone else is using it as we are (and whether they made a similar assumption and are not aware).

Thanks.

On 2015-06-29 12:13:32 -0400, Roger Light wrote:

This has now been fixed in http://git.eclipse.org/c/mosquitto/org.eclipse.mosquitto.git/commit/?h=fixes&id=SHA: e0037b3

Download page http://mosquitto.org/download/ has faulty links

migrated from Bugzilla #485986
status RESOLVED severity normal in component Mosquitto for 1.4
Reported in version unspecified on platform All
Assigned to: Roger Light

On 2016-01-16 03:53:35 -0500, Lance Riley wrote:

Sorry if this is not the correct place to report this, but I couldn't see anywhere more appropriate ...

The http://mosquitto.org/download/ download page has several faulty links.

  • The first source download link is titled 1.4.7 but actually links to the older version 1.4.5 source code tar.
  • The 2 Windows binary download links are titled 1.4.5 but link to the latest 1.4.7 executables.

All the other links look ok, including the GPG signature link for the source tar mentioned above.

Thanks.

On 2016-02-11 16:31:18 -0500, Roger Light wrote:

Thanks for this as well, I did fix the links shortly after you reported it but didn't update the bug.

CPU Overload with a TCP or MQTT connection/disconnection session.

migrated from Bugzilla #485143
status RESOLVED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

Original attachment names and IDs:

On 2016-01-04 09:11:22 -0500, Pierre-Yves BOISBUNON wrote:

On windows 7 / 10 /Server 2012 environment,

  • A TCP client connection followed by a disconnection increases CPU load for a 90 s period. This behavior seems to be present in all versions.
  • An MQTT client connection followed by a disconnection increases CPU load for 1.5 times the keepalive duration. This behavior seems to be present from version 1.4.x.

Any idea?

On 2016-01-04 11:09:00 -0500, Roger Light wrote:

What do you mean by a TCP connect/disconnect? A BSD socket connect() followed by close()? Or does e.g. "mosquitto_pub -t foo -m bar" produce this problem for you?

On 2016-01-05 04:35:18 -0500, Pierre-Yves BOISBUNON wrote:

Here is our test sequence:

mosquitto_pub.exe -t 0/rc/10 -m bar -p 5500

1451977925: New connection from ::1 on port 5500.

1451977925: New client connected from ::1 as mosqpub/3516-Pascal_Cha (c1, k60).

1451977925: Sending CONNACK to mosqpub/3516-Pascal_Cha (0, 0)

1451977925: Received PUBLISH from mosqpub/3516-Pascal_Cha (d0, q0, r0, m0, '0/rc/10', ... (3 bytes))

1451977925: Received DISCONNECT from mosqpub/3516-Pascal_Cha

1451977925: Client mosqpub/3516-Pascal_Cha disconnected.

Test OK

Connect Mqtt Client (keepalive : 10s)

1451979015: New connection from 127.0.0.1 on port 5500.

1451979015: New client connected from 127.0.0.1 as 1 (c0, k10, u'1').

1451979015: Sending CONNACK to 1 (0, 0)

1451979015: Received PUBLISH from 1 (d0, q0, r0, m0, '0/rc/1', ... (5 bytes))

1451979020: Received PUBLISH from 1 (d0, q0, r0, m0, '0/rc/1', ... (5 bytes))

1451979025: Received PUBLISH from 1 (d0, q0, r0, m0, '0/rc/1', ... (5 bytes))

1451979030: Received PUBLISH from 1 (d0, q0, r0, m0, '0/rc/1', ... (5 bytes))

Disconnect

1451979035: Received DISCONNECT from 1

1451979035: Client 1 disconnected.

Test OK

Connect Mqtt Client (keepalive : 10s)

1451978205: New connection from 127.0.0.1 on port 5500.

1451978205: New client connected from 127.0.0.1 as 1 (c0, k10, u'1').

1451978205: Sending CONNACK to 1 (0, 0)

1451978205: Received PUBLISH from 1 (d0, q0, r0, m0, '0/rc/1', ... (5 bytes))

1451978210: Received PUBLISH from 1 (d0, q0, r0, m0, '0/rc/1', ... (5 bytes))

1451978215: Received PUBLISH from 1 (d0, q0, r0, m0, '0/rc/1', ... (5 bytes))

Kill Mqtt Client

100% CPU 15s

1451978230: Client 1 has exceeded timeout, disconnecting.

1451978230: Socket error on client 1, disconnecting.

Connect Socket TCP

1451977972: New connection from 127.0.0.1 on port 5500.

Close Socket TCP

100% CPU 90s :

1451978061: Client has exceeded timeout, disconnecting.

1451978061: Socket error on client , disconnecting.

On 2016-01-06 11:38:48 -0500, Roger Light wrote:

Thanks, I'll take a look.

On 2016-01-14 08:49:23 -0500, Pierre-Yves BOISBUNON wrote:

Any update on this issue?

On 2016-01-14 16:21:27 -0500, Roger Light wrote:

Yes, sorry for the delay.

I believe this commit fixes the problem:

http://git.eclipse.org/c/mosquitto/org.eclipse.mosquitto.git/commit/?h=fixes&id=SHA: 00491da

Please reopen the bug if you disagree.

On 2016-01-15 05:49:27 -0500, Pierre-Yves BOISBUNON wrote:

Dear Roger

Thank you very much for you fix.

Could you please help to generate a native Windows installer?
We have not found any trouble with the Cygwin-based exe.

Thanks,

On 2016-01-15 07:33:09 -0500, Roger Light wrote:

This is set up to work easily with appveyor, so I've just pushed the changes to make it happen.

https://ci.appveyor.com/project/ralight/mosquitto/build/artifacts

The installer there is called 1.4.7, but it actually includes this fix.

On 2016-01-15 08:02:51 -0500, Pierre-Yves BOISBUNON wrote:

Thanks Roger for the installer.
I'm sorry but the issue is still there.
We do not see any change.

On 2016-01-15 10:14:41 -0500, Roger Light wrote:

In that case I can no longer duplicate the bug. What version of Windows are you using? I am on Windows 7.

It might be worth trying to compile yourself. You should use CMake to generate Visual Studio projects. The openssl files you need are at https://slproweb.com/products/Win32OpenSSL.html "Win32 OpenSSL v1.0.2e". You can disable pthreads support because it is only the broker we are worried about for this bug.

On 2016-01-19 08:17:00 -0500, Pierre-Yves BOISBUNON wrote:

Created attachment 259258
VS TCP Test Project

Dear Roger

The problem still exists on Windows 7.
I enclose the TCP test project related to the TCP part of the problem in this thread.

Hope it helps.

Thanks.

On 2016-01-19 10:36:06 -0500, Roger Light wrote:

Thanks, I'll take a look on Thursday hopefully.

On 2016-01-20 10:08:40 -0500, Wilfried Chauveau wrote:

Created attachment 259275
add POLLHUP flag to disconnection handling code.

On 2016-01-20 10:10:07 -0500, Wilfried Chauveau wrote:

Hello,

I fixed the issue by adding the flag POLLHUP to (POLLERR | POLLNVAL).

Best regards,
Wilfried Chauveau.
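For context, a minimal sketch of the kind of check involved is below; it is illustrative only, not the actual loop.c code. If POLLHUP is not treated as a disconnect alongside POLLERR and POLLNVAL, a closed peer can keep poll() waking up immediately without the client ever being dropped, which would match the 100% CPU symptom described above.

/* Illustrative only: not mosquitto's actual loop.c code. */
#include <poll.h>
#include <unistd.h>

/* Returns 0 if the client is still healthy, -1 if it should be dropped. */
static int check_client(int client_sock, int timeout_ms)
{
    struct pollfd pfd;
    pfd.fd = client_sock;
    pfd.events = POLLIN;
    pfd.revents = 0;

    int rc = poll(&pfd, 1, timeout_ms);
    if(rc < 0){
        return -1;                      /* poll() itself failed */
    }
    if(pfd.revents & (POLLERR | POLLNVAL | POLLHUP)){
        return -1;                      /* peer gone or socket invalid */
    }
    if(pfd.revents & POLLIN){
        char buf[256];
        ssize_t n = read(client_sock, buf, sizeof(buf));
        if(n <= 0){
            return -1;                  /* orderly close or read error */
        }
        /* ... hand the bytes to the packet parser ... */
    }
    return 0;
}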

On 2016-01-28 17:09:40 -0500, Roger Light wrote:

Thanks both. I forgot to update this bug after I found the POLLHUP problem myself and pushed an update. I believe it is definitely fixed this time - see the fixes branch again.

Clients not receiving subscribed messages at volume

migrated from Bugzilla #485952
status UNCONFIRMED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2016-01-15 12:05:29 -0500, Gary Gershon wrote:

Hi Roger,

We are experiencing behavior problems on 1.4.7 with message subscription volumes on the order of 20,000 messages per minute, with messages of about 60 bytes.

Since we're seeing no issues in publishing messages to the server at these rates, and the server utilization and network volumes are low, it appears that the issue is related to the subscribing message code path.

The first connected client seems to work without errors, but subsequent clients on the same topic keep failing. But this may be just the effects of additional load.

The problem appears to occur with both Qos 0 and 1 with mosquitto_sub and Paho Java clients.

From the trace (below), the server is logging PUBLISH (and PINGRESP) messages to these stalled clients, but the clients aren't receiving the messages.

From a browse of the code, I see there are a number of steps and a packet queue between logging (using debug) the PUBLISH and when the real socket write occurs.

The _mosquitto_packet_queue and _mosquitto_packet_write routines are daunting to follow given their mutex lock, linked list, callback, and state tests.

Roger, we'd appreciate your assistance to resolve this. We've spent a few days tracing the behavior and browsing the code.

Perhaps we should try a version that predates the Web Socket support modifications?

Thanks,
Gary

There are no observed problems at lower volumes on the order of 1,000 messages/second.

If I start multiple clients on the same workstation to the server, each will ultimately fail at different times subscribing to the same topic. Watching the terminal windows side-by-side, I will see one window stall while others continue to scroll by.

The server is running at a low utilization (<10%) with mosquitto running in a Docker container at 1-3% average utilization. The server is hosted in an Amazon Linux image and is assigned a single core of an Intel CPU. All the workload on this Linux AMI is in containers running Java apps for publishing and subscribing. One would expect that my AMI is sharing CPU resources with many others.

After the client stalls, it sends a PINGREQ after some number of seconds, but this does not restart the stream. A minute later, the client disconnects (and then reconnects).

There are currently only two topics, with Paho clients publishing to each. One device and topic is about 300 messages/second. The other topic is fed from a message aggregator at 20,000 messages/second. There are normally only three or four subscribers doing logging and analytics. The problem is when a subscriber includes the high-volume topic (it uses # to combine the two topics, but also fails using just the higher-volume topic name).

Server is 1.4.7 using Dockerfile slightly modified from https://github.com/toke/docker-mosquitto (debian:jessie on top of ubuntu 14.04 AMI)

Java clients are Paho 1.0.2 with Java 8 on AWS Linux and Apple MB El Capitan.

mosquitto_sub version 1.4.5 running on libmosquitto 1.4.5 on Apple MB.

Client trace with q0 - Client stopped receiving PUBLISH messages, client sent PINGREQ, and a PINGRESP was logged by server.

But no PUBLISH stream was received. After 60 seconds there was a disconnect and reconnect

Client mosqsub/58985-GMG-MB-Pr received PUBLISH (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (62 bytes))
Source/AIS/AISHuB !AIVDM,1,1,,A,8h3OwjQKP@4GCJPPP121IoCol54cd0AwwwwwwWraTP0,2_66
Client mosqsub/58985-GMG-MB-Pr received PUBLISH (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (46 bytes))
Source/AIS/AISHuB !AIVDM,1,1,,B,Dh3OwjPflnfpMaF9HNAF9HNqF9H,2_63
Client mosqsub/58985-GMG-MB-Pr received PUBLISH (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (47 bytes))
Source/AIS/AISHuB !AIVDM,1,1,,A,133nbePP00PD>i8MDF7h0?w02><,0*0D
Client mosqsub/58985-GMG-MB-Pr received PUBLISH (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (47 bytes))
Source/AIS/AISHuB !AIVDM,1,1,,B,13aOBk00000BbIbMfR0Q31W20><,0_36

Client mosqsub/58985-GMG-MB-Pr sending PINGREQ

Client mosqsub/58985-GMG-MB-Pr sending CONNECT
Client mosqsub/58985-GMG-MB-Pr received CONNACK
Client mosqsub/58985-GMG-MB-Pr sending SUBSCRIBE (Mid: 2, Topic: Source/AIS/#, QoS: 0)
Client mosqsub/58985-GMG-MB-Pr received SUBACK Subscribed (mid: 2): 0

Client mosqsub/58985-GMG-MB-Pr received PUBLISH (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (47 bytes))
Source/AIS/AISHuB !AIVDM,1,1,,B,36:Bd7Q000T97NFkTDd<GH>2><,0_1A
Client mosqsub/58985-GMG-MB-Pr received PUBLISH (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (47 bytes))
Source/AIS/AISHuB !AIVDM,1,1,,B,16:=HgPP00d<:0AtiFRCOv<0><,0_64
Client mosqsub/58985-GMG-MB-Pr received PUBLISH (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (71 bytes))
Source/AIS/AISHuB !AIVDM,1,1,,B,85NoHR1Kf1HgQVqnP?uKWerkmrKi3J9ptKFg@1urjDiKqn0N@a58,0_57
Client mosqsub/58985-GMG-MB-Pr received PUBLISH (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (46 bytes))

Server Log for this client:

cat server-58985.log | grep "mosqsub/58985" | grep -v "PUBLISH"

1452806532: New client connected from 100.35.218.11 as mosqsub/58985-GMG-MB-Pr (c1, k60).
1452806532: Sending CONNACK to mosqsub/58985-GMG-MB-Pr (0, 0)
1452806532: Received SUBSCRIBE from mosqsub/58985-GMG-MB-Pr
1452806532: Sending SUBACK to mosqsub/58985-GMG-MB-Pr
1452806592: Received PINGREQ from mosqsub/58985-GMG-MB-Pr
1452806592: Sending PINGRESP to mosqsub/58985-GMG-MB-Pr
1452806652: Socket error on client mosqsub/58985-GMG-MB-Pr, disconnecting.
1452806653: New client connected from 100.35.218.11 as mosqsub/58985-GMG-MB-Pr (c1, k60).
1452806653: Sending CONNACK to mosqsub/58985-GMG-MB-Pr (0, 0)
1452806653: Received SUBSCRIBE from mosqsub/58985-GMG-MB-Pr
1452806653: Sending SUBACK to mosqsub/58985-GMG-MB-Pr

Note on above: The PINGREQ was exactly 60 seconds after SUBACK, although the PUBLISH messages stopped a number of seconds earlier.

Note on above: The disconnect was similarly exactly 60 seconds after the PINGREQ/PINGRESP sequence. No PUBLISH messages were displayed during this interval.

Note on below: The server was logging PUBLISH messages in the same second -- before and after -- PINGREQ/PINGRESP, but nothing was displayed on mosquitto_sub terminal:

1452806592: Sending PUBLISH to mosqsub/58985-GMG-MB-Pr (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (47 bytes))
1452806592: Sending PUBLISH to mosqsub/58985-GMG-MB-Pr (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (113 bytes))
1452806592: Received PINGREQ from mosqsub/58985-GMG-MB-Pr
1452806592: Sending PINGRESP to mosqsub/58985-GMG-MB-Pr
1452806592: Sending PUBLISH to mosqsub/58985-GMG-MB-Pr (d0, q0, r0, m0, 'Source/AIS/JC', ... (47 bytes))
1452806592: Sending PUBLISH to mosqsub/58985-GMG-MB-Pr (d0, q0, r0, m0, 'Source/AIS/AISHuB', ... (34 bytes))

Thus we conclude there is an issue after logging outbound messages that is causing them not to reach the session socket correctly.

Similar situation with q=1
Display of messages stops, subsequent PINGREQ, nothing displayed for 60 seconds, then disconnect

Trace of mosquitto_sub client that stalled and subsequently sent a PINGREQ

mosquitto_sub -h 54.172.21.249 -t "Source/AIS/AISHuB" -v -d -q 1

Client mosqsub/57156-GMG-MB-Pr received PUBLISH (d0, q1, r0, m16680, 'Source/AIS/AISHuB', ... (47 bytes))
Client mosqsub/57156-GMG-MB-Pr sending PUBACK (Mid: 16680)
Source/AIS/AISHuB !AIVDM,1,1,,A,15NDTLgP1Gre39hG@hhWo?wR2>`<,0_71
Client mosqsub/57156-GMG-MB-Pr received PUBLISH (d0, q1, r0, m16681, 'Source/AIS/AISHuB', ... (65 bytes))
Client mosqsub/57156-GMG-MB-Pr sending PUBACK (Mid: 16681)
Source/AIS/AISHuB !AIVDM,1,1,,A,E02E340W6@1WPab3bPa200000000:usB?9TV@00003v010,4_1F
Client mosqsub/57156-GMG-MB-Pr received PUBLISH (d0, q1, r0, m16682, 'Source/AIS/AISHuB', ... (46 bytes))
Client mosqsub/57156-GMG-MB-Pr sending PUBACK (Mid: 16682)
Source/AIS/AISHuB !AIVDM,1,1,,B,Dh3OwjPflnfpMaF9HNAF9HNqF9H,2*63
Client mosqsub/57156-GMG-MB-Pr sending PINGREQ
^C
---

Server Config:

# Place your local configuration in /etc/mosquitto/conf.d/

pid_file /mqtt/data/mosquitto.pid

persistence true
persistence_location /mqtt/data/

max_inflight_messages 1000

max_queued_messages 40000

queue_qos0_messages true

message_size_limit 8192

persistent_client_expiration 2d
autosave_interval 1800

user ubuntu

log_dest file /mqtt/log/mosquitto.log
log_type debug
log_type error
log_type warning
log_type notice
connection_messages true
log_timestamp true

include_dir /mqtt/config/conf.d


Stats:

$SYS/broker/version mosquitto version 1.4.7
$SYS/broker/timestamp Tue, 22 Dec 2015 12:47:28 +0000
$SYS/broker/uptime 2959 seconds
$SYS/broker/clients/total 6
$SYS/broker/clients/inactive 0
$SYS/broker/clients/disconnected 0
$SYS/broker/clients/active 6
$SYS/broker/clients/connected 6
$SYS/broker/clients/expired 0
$SYS/broker/messages/stored 137
$SYS/broker/messages/received 2116842
$SYS/broker/messages/sent 2151495
$SYS/broker/subscriptions/count 4
$SYS/broker/retained messages/count 47
$SYS/broker/heap/current 42224
$SYS/broker/heap/maximum 3738960
$SYS/broker/publish/messages/dropped 0
$SYS/broker/publish/messages/received 1021941
$SYS/broker/publish/messages/sent 34633
$SYS/broker/publish/bytes/received 58142385
$SYS/broker/publish/bytes/sent 62518744
$SYS/broker/bytes/received 86347945
$SYS/broker/bytes/sent 90662285
$SYS/broker/load/messages/received/1min 42973.26
$SYS/broker/load/messages/received/5min 42390.26
$SYS/broker/load/messages/received/15min 40997.44
$SYS/broker/load/messages/sent/1min 43135.45
$SYS/broker/load/messages/sent/5min 42651.92
$SYS/broker/load/messages/sent/15min 41628.45
$SYS/broker/load/publish/received/1min 20687.61
$SYS/broker/load/publish/received/5min 20424.22
$SYS/broker/load/publish/received/15min 19793.77
$SYS/broker/load/publish/sent/1min 157.93
$SYS/broker/load/publish/sent/5min 261.30
$SYS/broker/load/publish/sent/15min 630.87
$SYS/broker/load/bytes/received/1min 1747734.36
$SYS/broker/load/bytes/received/5min 1728984.96
$SYS/broker/load/bytes/received/15min 1674080.57
$SYS/broker/load/bytes/sent/1min 1795965.65
$SYS/broker/load/bytes/sent/5min 1783664.37
$SYS/broker/load/bytes/sent/15min 1754350.49
$SYS/broker/load/sockets/1min 0.06
$SYS/broker/load/sockets/5min 0.20
$SYS/broker/load/sockets/15min 0.14
$SYS/broker/load/connections/1min 0.06
$SYS/broker/load/connections/5min 0.20
$SYS/broker/load/connections/15min 0.14

-- end --

On 2016-01-18 16:45:06 -0500, Roger Light wrote:

Hi Gary,

Thanks for the extensive information, it's much appreciated. I can imagine that there might be a problem, but can't point you somewhere to look quite yet. One thing I'll say is that all of the pthread* calls do nothing in the broker, so that reduces the complexity a little bit.

I've not yet been able to reproduce this, although I've also not had much time to try it - I've an important work deadline approaching. Is there a simple way that I can see the problem, ideally using mosquitto_sub/pub or the Paho Python client or a client using libmosquitto or the Paho C library? Basically I'm happiest working with C/C++ or Python.

Thanks,

Roger

On 2016-01-19 12:39:22 -0500, Gary Gershon wrote:

Hi Roger,

The server has a public IP. I will email you the address and topic string.

I can reproduce the problem by starting two mosquitto_sub clients using OS X Terminal.

After a minute they stall and subsequently timeout and reconnect.

It's possible you might simulate my publisher using a Python or C test harness and thus drive your own broker instance. My publisher is a Paho Java client sending messages of about 30 to 120 bytes (average 60), averaging 20,000/minute. My publisher is in a Docker container adjacent to the broker, so there would be close to zero network latency.

Another option would be for me to redirect my publisher to your 1.4.7 test broker instance, but this would add network delays. I have no idea if that would matter, although it might be interesting to find out.

Best,
Gary

On 2016-01-19 18:23:52 -0500, Roger Light wrote:

Thanks for the server details, but I think I really need to fiddle with the server to see what is up.

Is your publisher just publishing to that single topic? I'll see about knocking up a test client.

mosquitto doesn't build in parallel mode

migrated from Bugzilla #463884
status RESOLVED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-04-03 07:10:12 -0400, Gianfranco Costamagna wrote:

The patch is attached; the problem is just that cmake doesn't handle the mosquitto dependency correctly.

diff --git a/lib/cpp/CMakeLists.txt b/lib/cpp/CMakeLists.txt
index 68b1453..0a9fd91 100644
--- a/lib/cpp/CMakeLists.txt
+++ b/lib/cpp/CMakeLists.txt
@@ -5,7 +5,7 @@ link_directories(${mosquitto_BINARY_DIR}/lib)
add_library(mosquittopp SHARED
mosquittopp.cpp mosquittopp.h)

-target_link_libraries(mosquittopp mosquitto)
+target_link_libraries(mosquittopp libmosquitto)
set_target_properties(mosquittopp PROPERTIES
VERSION ${VERSION}
SOVERSION 1

On 2015-04-09 15:09:44 -0400, Roger Light wrote:

Thanks for the patch. Could you please add a comment stating that you comply with the terms here: http://www.eclipse.org/legal/CoO.php

After you have done this I can accept the patch.

On 2015-04-09 15:16:52 -0400, Gianfranco Costamagna wrote:

Yes, you have my permission, and I comply with the terms.

cheers,

G.

On 2015-04-11 07:16:06 -0400, Roger Light wrote:

This will be fixed in 1.4.2, thanks for the patch.

On 2015-05-20 04:57:34 -0400, Gianfranco Costamagna wrote:

thanks to you!

[PATCH] Cross-compiling websockets.c

migrated from Bugzilla #475807
status RESOLVED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-08-25 09:06:03 -0400, Tyler Brandon wrote:

diff --git a/src/Makefile b/src/Makefile
index 2cfb7d4..2bc70de 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -90,7 +90,7 @@ util_mosq.o : ../lib/util_mosq.c ../lib/util_mosq.h
	${CROSS_COMPILE}${CC} $(BROKER_CFLAGS) -c $< -o $@

websockets.o : websockets.c mosquitto_broker.h
-	${CC} $(BROKER_CFLAGS) -c $< -o $@
+	${CROSS_COMPILE}${CC} $(BROKER_CFLAGS) -c $< -o $@

will_mosq.o : ../lib/will_mosq.c ../lib/will_mosq.h
	${CROSS_COMPILE}${CC} $(BROKER_CFLAGS) -c $< -o $@

On 2015-09-16 16:04:26 -0400, Roger Light wrote:

*** Bug 475808 has been marked as a duplicate of this bug. ***

On 2015-09-16 16:15:03 -0400, Roger Light wrote:

Thanks, this was fixed in the develop branch but I'm not sure why I didn't get it into the last bugfix release.

I've committed a fix for it.

On 2015-10-03 12:25:59 -0400, testato testato wrote:

Today I installed the latest Mosquitto 1.4.4 on a Xubuntu distribution without problems. The websocket connection works.

But I also installed it on a CentOS server, using the official mosquitto repo, and when I start Mosquitto I receive this message in the terminal:

mosquitto -c /etc/mosquitto/mosquitto.conf
Error: Websockets support not available.
Error found at /etc/mosquitto/mosquitto.conf:14.
Error: Unable to open configuration file.

I tried removing the websockets listener from the config file and it works with the standard MQTT protocol.

Is it related to this Bug ?

On 2015-10-03 18:52:32 -0400, Roger Light wrote:

No, it is not related to this bug.

I'm afraid that the packages for CentOS do not provide websockets support because the required library (libwebsockets) isn't available in CentOS.

On 2015-10-04 05:31:30 -0400, testato testato wrote:

But it is possible to compile and install it, so why not enable the websocket option in mosquitto?
People who need websockets on CentOS would only need to recompile libwebsockets, and could then use the stable mosquitto 1.4.4 package directly without touching it.

Do you think it is better to open an issue? Where should I open it?

Thanks

Port to event loop library

migrated from Bugzilla #470008
status NEW severity enhancement in component Mosquitto for 1.4
Reported in version unspecified on platform All
Assigned to: Roger Light

On 2015-06-11 17:42:07 -0400, Roger Light wrote:

Porting to an event loop library such as libevent or libuv would allow the best performance on all supported platforms. At the moment using poll() is a bottleneck; this change would reduce it.

It also gives an opportunity to tidy up the existing code.
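As a rough illustration of what such a port buys, here is a minimal libevent sketch (assuming the event2 headers are installed): instead of rebuilding and scanning a pollfd array on every loop iteration, each socket gets a persistent read event and only ready sockets are dispatched. This is illustrative only, not mosquitto code.

/* Illustrative only; not mosquitto code. Build with -levent. */
#include <event2/event.h>
#include <stdio.h>
#include <unistd.h>

static void on_readable(evutil_socket_t fd, short events, void *arg)
{
    char buf[512];
    ssize_t n = read(fd, buf, sizeof(buf));
    if(n <= 0){
        /* EOF or error: a real broker would tear the client down here. */
        event_base_loopbreak((struct event_base *)arg);
        return;
    }
    printf("read %zd bytes from fd %d\n", n, (int)fd);
}

int main(void)
{
    struct event_base *base = event_base_new();

    /* One persistent read event per connection; here stdin stands in for a
     * client socket. */
    struct event *ev = event_new(base, STDIN_FILENO, EV_READ | EV_PERSIST,
                                 on_readable, base);
    event_add(ev, NULL);

    event_base_dispatch(base);  /* runs until loopbreak */

    event_free(ev);
    event_base_free(base);
    return 0;
}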

config.mk: wrong BROKER_LIBS for FreeBSD

migrated from Bugzilla #485131
status RESOLVED severity trivial in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2016-01-04 07:13:05 -0500, Peter Morjan wrote:

On FreeBSD BROKER_LIBS should not include '-ldl'
The following patch solves the issue:

diff --git a/config.mk b/config.mk
index 01532c3..9eb5fe5 100644
--- a/config.mk
+++ b/config.mk
@@ -115,7 +115,7 @@ LIB_LDFLAGS:=${LDFLAGS}
BROKER_CFLAGS:=${LIB_CFLAGS} ${CPPFLAGS} -DVERSION=""${VERSION}"" -DTIMESTAMP=""${TIMESTAMP}"" -DWITH_BROKER
CLIENT_CFLAGS:=${CFLAGS} ${CPPFLAGS} -I../lib -DVERSION=""${VERSION}""

-ifneq ($(or $(find $(UNAME),FreeBSD), $(find $(UNAME),OpenBSD)),)
+ifneq ($(or $(findstring $(UNAME),FreeBSD), $(findstring $(UNAME),OpenBSD)),)
BROKER_LIBS:=-lm
else
BROKER_LIBS:=-ldl -lm

On 2016-02-11 16:53:00 -0500, Roger Light wrote:

Thanks, I've fixed this in the fixes branch.

Support multiple authentication plugins

migrated from Bugzilla #464543
status ASSIGNED severity enhancement in component Mosquitto for 1.5
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-04-13 15:31:11 -0400, Roger Light wrote:

This was posted on the mailing list:

My largest gripe with the plugin interface as it is right now is that only one
plugin can be active at any given time. The krb5 plugin is intended to play a
largely supplementary role, yet the only option at the moment is to have it
replace existing authentication. Rather than splitting up the plugins (I realise
authentication and authorisation are usually tightly coupled), I would
therefore suggest the option of allowing multiple plugins at once.

My suggestion is inspired by PAM, which allows a plugin to either grant access
immediately, deny access immediately or to "defer" action. Now, PAM has a much
more complicated setup, but these three actions should suffice for mosquitto, in
my opinion. The authentication/authorisation workflow would consist of the
following steps.

  • If the plugin returns MOSQ_AUTH_GRANTED, access is immediately granted, the
    user authenticated correctly, obtained the requested access if this is an ACL
    call, or provided a correct TLS-PSK.
  • If the plugin returns MOSQ_AUTH_DENIED, access is immediately denied, the
    plugin has an explicit rule to deny this action, either this user does not
    match some criteria but does exist, the password did not match or access to
    this topic is definitely denied.
  • If the plugin instead returns MOSQ_AUTH_DEFERRED, the next plugin in the
    (configuration) list is invoked.
  • If at the end of authentication/authorisation the status is
    MOSQ_AUTH_DEFERRED, there was no plugin granting access, so it is then
    denied.

This, to me, seems like a relatively small modification to the plugin
architecture that would greatly enhance the possibilities the owner of a broker
has. In my case, I would first allow krb5-based authentication, then fall back
to the inbuilt mosquitto behaviour, allowing private key authentication and
lastly, passwords. Similarly, since the krb5 plugin has no information about
ACL, it would immediately return MOSQ_AUTH_DEFERRED, and the normal ACL file
would get used.
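As a concrete illustration of that workflow, here is a minimal sketch of the proposed deferral chain. The MOSQ_AUTH_* values, the plugin structure and the function names are invented for the example; they are not the actual mosquitto plugin API.

/* Illustrative sketch of the PAM-style chaining proposed above. */
#include <stddef.h>

enum auth_result {
    MOSQ_AUTH_GRANTED,
    MOSQ_AUTH_DENIED,
    MOSQ_AUTH_DEFERRED
};

struct auth_plugin {
    const char *name;
    enum auth_result (*check_username_password)(const char *username,
                                                 const char *password);
};

/* Walk the configured plugin list in order; the first definitive answer
 * (granted or denied) wins, deferrals fall through to the next plugin, and
 * if every plugin defers the default is to deny. */
static enum auth_result authenticate(const struct auth_plugin *plugins,
                                     size_t count,
                                     const char *username,
                                     const char *password)
{
    for(size_t i = 0; i < count; i++){
        enum auth_result r = plugins[i].check_username_password(username, password);
        if(r == MOSQ_AUTH_GRANTED || r == MOSQ_AUTH_DENIED){
            return r;
        }
        /* MOSQ_AUTH_DEFERRED: try the next plugin in the list. */
    }
    return MOSQ_AUTH_DENIED;
}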

On 2015-06-04 10:20:22 -0400, Jan-Piet Mens wrote:

What would be very good to have is support for ACLs/passwords from files and authentication plugins. I'd like to see files checked first and then auth plugins.

On 2015-06-29 18:33:57 -0400, Roger Light wrote:

This is now available for testing on the develop branch. I would appreciate any feedback.

Please note that this is not the only change to plugins that is planned for 1.5.

On 2015-09-29 06:04:35 -0400, Jan-Piet Mens wrote:

As discussed on IRC, I strongly feel that the order should be swapped: first files, then authentication plugins. The reason is that files checking will typically be "cheaper" than whatever an authentication plugin does. For example, we want to keep administrative accounts out of plugin (-databases) so we don't even want the plugin to be invoked for these (even if the plugin can defer -- if it knows what to defer).

Memory is not freed to kernel

migrated from Bugzilla #486028
status CLOSED severity normal in component Mosquitto for 1.4
Reported in version unspecified on platform PC
Assigned to: Roger Light

On 2016-01-18 05:30:17 -0500, Pierre Fersing wrote:

We had an issue with Mosquitto not freeing memory after using it. On our server, this means that Mosquitto currently uses 4 GB of memory... for a mostly empty broker.

We are in this situation because a few days earlier (~3 days) our consumer stopped consuming messages. Since we use persistent sessions, Mosquitto kept those messages (in-memory + in the persistence file). The file was about 4 GB, which matches the Mosquitto process consuming 4 GB at that time. After our consumer processed the backlog,
the persistence file is now a few KB, but Mosquitto still uses all 4 GB of RAM.

I'm not sure it's a memory leak like a missing free(); the Mosquitto memory RSS on our server is not growing, it's stable at 4 GB.

While trying to reproduce this situation, I added some printf calls that showed the current RSS usage. Sometimes the RSS was the same just before and just after a call to _mosquitto_free(store->payload).

My best guess would be that malloc/free don't always return memory to the kernel (Google seems to confirm this). I didn't find more information on when free() really frees the memory and when it does not.

It would be great if Mosquitto could free that memory and not stay at the memory peak forever.

Version used:

Mosquitto : 1.4.7
libc : Ubuntu Trusty libc6 2.19-0ubuntu6.6
Kernel : Ubuntu Trusty linux-image-virtual-lts-wily 4.2.0.23.17
uname : 4.2.0-23-generic

On 2016-01-18 16:30:25 -0500, Roger Light wrote:

Hi Pierre,

I've been very careful to ensure that there are no memory leaks in mosquitto. Some of the fixes in the 1.4.x releases were related to the case where memory was not leaked, but could have been freed because they were not needed any more. As far as I am aware now, all memory that is not needed is free()d as soon as possible.

What happens after this point is beyond my control. As you've suggested, free() does not release memory to the OS on most platforms. This means that on the next malloc(), memory is much quicker to acquire.

You could look into using an alternative malloc/free implementation that does return the memory to the OS, but first I'd suggest considering whether this is really a problem. If you have spikes of usage up to 4 GB, but most of the time are at a few KB, then the memory not in use should be swapped out to disk by your OS. Unless you have a shortage of swap space it should not influence other programs.
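For anyone who wants to see this behaviour in isolation, the small glibc-only program below allocates and frees a large number of blocks and then calls malloc_trim() to ask the allocator to hand free heap pages back to the kernel. It is purely illustrative; mosquitto itself does not call malloc_trim().

/* glibc-only demonstration; illustrative, not part of mosquitto. */
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    enum { N = 100000, SZ = 4096 };
    char **blocks = malloc(N * sizeof(*blocks));

    for(int i = 0; i < N; i++){
        blocks[i] = malloc(SZ);     /* grow the heap to a few hundred MB */
        if(blocks[i]) blocks[i][0] = 1;
    }
    for(int i = 0; i < N; i++){
        free(blocks[i]);            /* RSS typically stays high after this */
    }
    free(blocks);

    /* Ask glibc to return as much free heap as possible to the kernel.
     * Returns 1 if any memory was released. */
    int released = malloc_trim(0);
    printf("malloc_trim released memory: %s\n", released ? "yes" : "no");
    return 0;
}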

On 2016-02-11 16:32:41 -0500, Roger Light wrote:

I'm closing this as "not eclipse", because it's essentially something that mosquitto can't do anything about - it's down to the malloc implementation.

If you disagree, please reopen the bug providing more details.

Authentication plugin should not block the broker

migrated from Bugzilla #472050
status NEW severity enhancement in component Mosquitto for 1.4
Reported in version 1.4 on platform All
Assigned to: Roger Light

On 2015-07-07 08:20:28 -0400, Roger Light wrote:

Network based authentication lookups can cause a big delay and hence have a substantial negative impact on the performance of the broker. This needs an asynchronous approach.

Thread on mailing list: https://dev.eclipse.org/mhonarc/lists/mosquitto-dev/msg00714.html
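As a rough illustration of the asynchronous approach being asked for here, the sketch below runs the slow lookup on a worker thread and reports back through a callback, so the network wait never blocks the main loop. The types and function names are invented for illustration; this is not a proposed mosquitto plugin API, and a real broker would also have to marshal the callback back onto the main loop thread.

/* Illustrative only; not a mosquitto API. Build with -lpthread. */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

typedef void (*auth_done_cb)(void *context, int granted);

struct auth_request {
    char *username;
    void *context;          /* e.g. the pending client connection */
    auth_done_cb done;
};

static void *auth_worker(void *arg)
{
    struct auth_request *req = arg;

    /* Stand-in for a slow LDAP/HTTP/database lookup. */
    int granted = (strcmp(req->username, "alice") == 0);

    req->done(req->context, granted);   /* signal completion */
    free(req->username);
    free(req);
    return NULL;
}

/* Returns 0 if the check was started; the broker keeps the client in a
 * "pending" state until the callback fires. */
static int auth_check_async(const char *username, void *context, auth_done_cb done)
{
    struct auth_request *req = calloc(1, sizeof(*req));
    if(!req) return -1;
    req->username = strdup(username);
    if(!req->username){ free(req); return -1; }
    req->context = context;
    req->done = done;

    pthread_t tid;
    if(pthread_create(&tid, NULL, auth_worker, req)){
        free(req->username);
        free(req);
        return -1;
    }
    pthread_detach(tid);
    return 0;
}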

mosquitto crashed by segfault on CentOS (v1.4.1~v1.4.2)

migrated from Bugzilla #470534
status CLOSED severity normal in component Mosquitto for 1.4
Reported in version unspecified on platform PC
Assigned to: Roger Light

On 2015-06-18 21:40:59 -0400, Taro Kawazoe wrote:

I installed mosquitto via yum on CentOS 6.5 (yum repository: home_oojah_mqtt).
I've encountered the issue below and am looking for a solution.
Any hints please?

mosquitto v1.4.1 crashes from time to time with the following segfault:

Jun 8 12:51:54 ****** kernel: mosquitto[27486]: segfault at 0 ip 00007fde8c92382c sp 00007fff79e70438 error 4 in libc-2.12.so[7fde8c7eb000+18a000]
Jun 10 20:57:01 ****** kernel: mosquitto[20195]: segfault at 0 ip 00007f08f49cc82c sp 00007fff9db8e568 error 4 in libc-2.12.so[7f08f4894000+18a000]

By the way, I've also found similar issues in Debian cases:

So I upgraded mosquitto from v1.4.1 to v1.4.2 via yum as well,
however mosquitto crashed again.

Jun 14 10:28:02 ****** kernel: mosquitto[21437]: segfault at 0 ip 00007fe6fedde82c sp 00007fffd9e49ed8 error 4 in libc-2.12.so[7fe6feca6000+18a000]

On 2015-06-21 16:01:20 -0400, Roger Light wrote:

I regret that the segfault output doesn't give any useful information. I'll install CentOS 6.5 in a virtual machine and see if I can reproduce it. Can you give any hints as to what the behaviour of your clients is (i.e. topics, size of payload, QoS, clean session or not, anything you can provide)?

On 2015-06-23 23:51:44 -0400, Taro Kawazoe wrote:

The only information I have is that mosquitto output only the following application log line whenever it crashed:

1433735514: Saving in-memory database to /var/lib/mosquitto/mosquitto.db.
(= Jun 8 12:51:54)
1433937421: Saving in-memory database to /var/lib/mosquitto/mosquitto.db.
(= Jun 10 20:57:01)
1434245282: Saving in-memory database to /var/lib/mosquitto/mosquitto.db.
(= Jun 14 10:28:02)

I don't have any more information because mosquitto has not crashed since Jun 14.
So, I'm sorry that I don't know the topic and payload conditions yet.

I don't know whether this is useful information, but here are the contents of mosquitto.conf.

#mosquitto.conf

retry_interval 60
pid_file /var/run/mosquitto.pid
max_inflight_messages 100
max_queued_messages 100
queue_qos0_messages false
message_size_limit 0
persistent_client_expiration 30d
allow_duplicate_messages true
upgrade_outgoing_qos false
port 1883
max_connections -1
persistence true
persistence_location /var/lib/mosquitto/

log_dest stderr
log_type error
log_type warning
log_type notice
log_type information
log_type debug
connection_messages true

allow_anonymous false
password_file /etc/mosquitto/conf.d/passwd.conf
acl_file /etc/mosquitto/conf.d/acl.conf

Thanks.

On 2015-08-17 16:57:43 -0400, Roger Light wrote:

I'm closing this bug as invalid, because there isn't enough information to work with. If there is more you can add, please reopen the bug.

Add systemd support

migrated from Bugzilla #471053
status RESOLVED severity enhancement in component Mosquitto for 1.5
Reported in version unspecified on platform PC
Assigned to: Roger Light

On 2015-06-25 16:48:06 -0400, Roger Light wrote:

Add systemd support on Linux.

On 2015-06-26 06:45:50 -0400, Eclipse Genie wrote:

Gerrit change https://git.eclipse.org/r/50862 was merged to [develop].
Commit: http://git.eclipse.org/c/mosquitto/org.eclipse.mosquitto.git/commit/?id=SHA: 29731b5

On 2015-06-26 06:47:13 -0400, Roger Light wrote:

Patch provide through gerrit has been merged.

What about a 64 bit version for Windows?

migrated from Bugzilla #477976
status UNCONFIRMED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-09-21 12:32:59 -0400, Lorenzo Maiorfi wrote:

It would be great! :)

200ms delay in publishing

migrated from Bugzilla #482713
status UNCONFIRMED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-11-20 12:26:47 -0500, Cefn Hoile wrote:

Greetings and thanks for all your work on Mosquitto as a reference MQTT Websockets server in our work. Any ideas what we can be getting wrong in our build or configuration of Mosquitto to create HUGE delays in dispatch? Had a good go at investigating alternative configurations, but suspect there's something obvious we've missed.

We built 1.4.2 against libwebsockets.so.3 -> libwebsockets.so.3.0.0 with only two changes - adding websocket support (WITH_WEBSOCKETS:=yes) and removing SRV (WITH_SRV:=no)

However, we're really surprised to see a consistent 200ms delay in sending through Mosquitto versus Mosca which you can see in the logs below. What takes 147 milliseconds for Mosca seems to take 2200 milliseconds for Mosquitto! Of course with QoS2 this doubles.

Timestamps are shown as colon-separated values: 'absolute', 'relative to the start of the test', and 'relative to the last logged event', in milliseconds. In both cases, the client and server are co-hosted on localhost.

Our test suite has an MQTT.js websockets client which subscribes globally, sends a single message, then each time it gets notified of its last message, it sends the next message in the series, timestamping the events to calculate round trip time. The only difference between the Mosca and Mosquitto test cases are the launchServer and killServer implementations (separate implementation provided for each flavour of server).

You can see the test code for "Can ping back messages to test roundtrip time" at https://gist.github.com/cefn/be716b3099ba2194b473
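For reference, a rough libmosquitto (C) equivalent of that round-trip test is sketched below. It runs over plain MQTT/TCP rather than websockets, so it can only help narrow down whether the delay is specific to the websockets listener; host, port, topic and message count are assumptions. Build with -lmosquitto.

/* Illustrative round-trip timing client using libmosquitto. */
#include <mosquitto.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define TOPIC "roundtrip/test"
#define COUNT 10

static struct timespec sent_at;
static int received;

static double ms_since(const struct timespec *t)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - t->tv_sec) * 1000.0 + (now.tv_nsec - t->tv_nsec) / 1e6;
}

static void send_next(struct mosquitto *mosq)
{
    clock_gettime(CLOCK_MONOTONIC, &sent_at);
    mosquitto_publish(mosq, NULL, TOPIC, 5, "hello", 0, false);
}

static void on_connect(struct mosquitto *mosq, void *obj, int rc)
{
    mosquitto_subscribe(mosq, NULL, TOPIC, 0);
}

static void on_subscribe(struct mosquitto *mosq, void *obj, int mid,
                         int qos_count, const int *granted_qos)
{
    send_next(mosq);
}

static void on_message(struct mosquitto *mosq, void *obj,
                       const struct mosquitto_message *msg)
{
    printf("msg %d round trip: %.1f ms\n", received, ms_since(&sent_at));
    if(++received >= COUNT){
        mosquitto_disconnect(mosq);
    }else{
        send_next(mosq);
    }
}

int main(void)
{
    mosquitto_lib_init();
    struct mosquitto *mosq = mosquitto_new(NULL, true, NULL);
    mosquitto_connect_callback_set(mosq, on_connect);
    mosquitto_subscribe_callback_set(mosq, on_subscribe);
    mosquitto_message_callback_set(mosq, on_message);
    mosquitto_connect(mosq, "localhost", 1883, 60);
    mosquitto_loop_forever(mosq, -1, 1);
    mosquitto_destroy(mosq);
    mosquitto_lib_cleanup();
    return 0;
}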

The mosquitto.conf we are using looks like...

listener 3000 127.0.0.1
protocol websockets

MOSQUITTO Log
1448039685811:0 Connected
1448039686010:199:199 Subscribed
1448039686011:200:1 Sending msg:0
1448039686209:398:198 Received msg:0
1448039686210:399:1 Sending msg:1
1448039686409:598:199 Received msg:1
1448039686409:598:0 Sending msg:2
1448039686608:797:199 Received msg:2
1448039686608:797:0 Sending msg:3
1448039686808:997:200 Received msg:3
1448039686809:998:1 Sending msg:4
1448039687009:1198:200 Received msg:4
1448039687009:1198:0 Sending msg:5
1448039687209:1398:200 Received msg:5
1448039687209:1398:0 Sending msg:6
1448039687410:1599:201 Received msg:6
1448039687410:1599:0 Sending msg:7
1448039687610:1799:200 Received msg:7
1448039687610:1799:0 Sending msg:8
1448039687811:2000:201 Received msg:8
1448039687811:2000:0 Sending msg:9
1448039688012:2201:201 Received msg:9

MOSCA Log
1448039633828:0 Connected
1448039633854:26:26 Subscribed
1448039633854:26:0 Sending msg:0
1448039633873:45:19 Received msg:0
1448039633873:45:0 Sending msg:1
1448039633888:60:15 Received msg:1
1448039633888:60:0 Sending msg:2
1448039633902:74:14 Received msg:2
1448039633902:74:0 Sending msg:3
1448039633910:82:8 Received msg:3
1448039633911:83:1 Sending msg:4
1448039633919:91:8 Received msg:4
1448039633919:91:0 Sending msg:5
1448039633930:102:11 Received msg:5
1448039633930:102:0 Sending msg:6
1448039633942:114:12 Received msg:6
1448039633942:114:0 Sending msg:7
1448039633954:126:12 Received msg:7
1448039633954:126:0 Sending msg:8
1448039633966:138:12 Received msg:8
1448039633966:138:0 Sending msg:9
1448039633975:147:9 Received msg:9

Hope this test case is digestible and happy to investigate by running further diagnostics following your guidance if you have the time to look into it.

Thanks,

Cefn

"Official" CentOS 7 mosquitto 1.4.4 RPM lacks ECC support

migrated from Bugzilla #478263
status RESOLVED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-09-24 03:29:03 -0400, Ville Warsta wrote:

Mosquitto 1.4.4 RPM from the repo http://download.opensuse.org/repositories/home:/oojah:/mqtt/CentOS_CentOS-7/home:oojah:mqtt.repo does not support ECC cipher suites.

Is this intentional or perhaps the build machine has an outdated version of openssl-devel?

Ciphers with the prebuilt RPM from the repo above and "tls_version tlsv1.2" in mosquitto configuration:

$ nmap -sV -PN -p 8883 x.x.x.x --script ssl-enum-ciphers

Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-24 10:11 EEST
Nmap scan report for x.x.x.x
Host is up (0.032s latency).
PORT STATE SERVICE VERSION
8883/tcp open ssl/unknown
| ssl-enum-ciphers:
| SSLv3: No supported ciphers found
| TLSv1.2:
| ciphers:
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA256 - strong
| TLS_RSA_WITH_AES_128_GCM_SHA256 - strong
| TLS_RSA_WITH_AES_256_CBC_SHA - strong
| TLS_RSA_WITH_AES_256_CBC_SHA256 - strong
| TLS_RSA_WITH_AES_256_GCM_SHA384 - strong
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong
| TLS_RSA_WITH_IDEA_CBC_SHA - weak
| TLS_RSA_WITH_RC4_128_MD5 - strong
| TLS_RSA_WITH_RC4_128_SHA - strong
| TLS_RSA_WITH_SEED_CBC_SHA - strong
| compressors:
| NULL
|_ least strength: weak

Ciphers with an RPM built in a CentOS 7 machine from the 1.4.4 SRPM (no additional patches) and "tls_version tlsv1.2" in mosquitto configuration:

$ nmap -sV -PN -p 8883 x.x.x.x --script ssl-enum-ciphers

Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-24 10:10 EEST
Nmap scan report for x.x.x.x
Host is up (0.019s latency).
PORT STATE SERVICE VERSION
8883/tcp open ssl/unknown
| ssl-enum-ciphers:
| SSLv3: No supported ciphers found
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - strong
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 - strong
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 - strong
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 - strong
| TLS_ECDHE_RSA_WITH_RC4_128_SHA - strong
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA256 - strong
| TLS_RSA_WITH_AES_128_GCM_SHA256 - strong
| TLS_RSA_WITH_AES_256_CBC_SHA - strong
| TLS_RSA_WITH_AES_256_CBC_SHA256 - strong
| TLS_RSA_WITH_AES_256_GCM_SHA384 - strong
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong
| TLS_RSA_WITH_IDEA_CBC_SHA - weak
| TLS_RSA_WITH_RC4_128_MD5 - strong
| TLS_RSA_WITH_RC4_128_SHA - strong
| TLS_RSA_WITH_SEED_CBC_SHA - strong
| compressors:
| NULL
|_ least strength: weak

On 2015-09-24 04:20:18 -0400, Roger Light wrote:

The builds were set to disable EC, I can't remember why though. I've enabled it again and the build has succeeded so this should be fixed now.

Thanks for the report.

On 2015-11-20 09:35:27 -0500, Ville Warsta wrote:

Sorry for the delay; I recently tried with the latest CentOS7 RPM (mosquitto-1.4.5-1.1.x86_64.rpm) and it seems that ECDHE is still not possible.

protocol errors for QoS1 and Qos2 in a congested environment

migrated from Bugzilla #486892
status CLOSED severity major in component Mosquitto for 1.4
Reported in version 1.4 on platform All
Assigned to: Roger Light

On 2016-01-30 11:05:20 -0500, Christoph Krey wrote:

Broker 1.4.4
Client libmosquitto 1.4.7

Congestion situation arises because initial SUBSCRIBE triggers the transmission
of about 1000 retained messages, each about 1500 bytes long.

Broker publishes Qos1 and Qos2 messages...
The client repeats PUBRECs although a PUBREL was sent by the broker and a PUBCOMP received, leading to the connection being closed.

The re-established connection is closed repeatedly because of an unexpected PUBREL received by the broker.

This situation repeats until the broker decides to answer the PUBRECs with PUBRELs instead of closing the connection.

After reconnection, the broker re-sends (dup=1) QoS 1 messages, but sends some of them 2 or 3 times.

After reconnection, the broker sends (dup=0) multiple QoS 1 messages with the same message ID
without receiving a PUBACK in between.

Log will be sent directly to roger

On 2016-02-11 16:55:19 -0500, Roger Light wrote:

I've not yet had a chance to analyse this fully, but am closing this because you've accidentally submitted two reports.

*** This bug has been marked as a duplicate of bug 486891 ***

Mosquitto cpu usage increases over time up to 100%

migrated from Bugzilla #468987
status RESOLVED severity major in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

Original attachment names and IDs:

On 2015-06-01 09:15:47 -0400, Guido Hinderberger wrote:

The CPU usage of mosquitto increases continuously up to 100% and then clients are not able to connect to mosquitto anymore.

We currently have a maximum of 500 concurrent clients connected to the broker. The client is a smartphone app and it is mostly used for just 1-2 minutes. Therefore we have multiple connects/disconnects per second from different clients.

Any idea what could cause the increasing CPU usage, or some hints on how to get more information about the cause? The log file is not showing any warnings or errors.

On 2015-06-02 09:08:56 -0400, Roger Light wrote:

Thanks for the report, can you provide any more details about your setup - like whether the clients are clean session or not, what qos you are using and that sort of thing.

If it's not sensitive for you, could you post the output of subscribing to $SYS/# ?

On 2015-06-02 11:48:46 -0400, Guido Hinderberger wrote:

Thanks for your quick reply.

The smartphone clients all connect with cleanSession=true.
The clients publish with QoS level 1.

We have a few retained messages.

We have one system client which stores the messages into redis; this one connects with cleanSession=false so that mosquitto keeps the messages if this component fails.

Here the current output of the $SYS topics:

$SYS/broker/version mosquitto version 1.4.2
$SYS/broker/timestamp 2015-05-21 18:10:14+0000
$SYS/broker/uptime 87450 seconds
$SYS/broker/clients/total 243
$SYS/broker/clients/inactive -2
$SYS/broker/clients/disconnected -2
$SYS/broker/clients/active 245
$SYS/broker/clients/connected 245
$SYS/broker/clients/expired 0
$SYS/broker/messages/stored 730
$SYS/broker/messages/received 3251181
$SYS/broker/messages/sent 3705873
$SYS/broker/subscriptions/count 1177
$SYS/broker/retained messages/count 97
$SYS/broker/heap/current 18446744073624325616
$SYS/broker/heap/maximum 18446744073709551608
$SYS/broker/publish/messages/dropped 0
$SYS/broker/publish/messages/received 9
$SYS/broker/publish/messages/sent 624018
$SYS/broker/publish/bytes/received 4481495927
$SYS/broker/publish/bytes/sent 30776416172
$SYS/broker/bytes/received 4620613781
$SYS/broker/bytes/sent 29990492420
$SYS/broker/load/messages/received/1min 3613.14
$SYS/broker/load/messages/received/5min 3417.80
$SYS/broker/load/messages/received/15min 3319.06
$SYS/broker/load/messages/sent/1min 4672.00
$SYS/broker/load/messages/sent/5min 4373.11
$SYS/broker/load/messages/sent/15min 4167.35
$SYS/broker/load/publish/sent/1min 1283.22
$SYS/broker/load/publish/sent/5min 1151.39
$SYS/broker/load/publish/sent/15min 1039.80
$SYS/broker/load/publish/received/1min 0.05
$SYS/broker/load/publish/received/5min 0.27
$SYS/broker/load/publish/received/15min 0.30
$SYS/broker/load/bytes/received/1min 5115168.40
$SYS/broker/load/bytes/received/5min 5022974.51
$SYS/broker/load/bytes/received/15min 4881339.28
$SYS/broker/load/bytes/sent/1min 54746302.60
$SYS/broker/load/bytes/sent/5min 50299327.27
$SYS/broker/load/bytes/sent/15min 46336604.26
$SYS/broker/load/sockets/1min 229.23
$SYS/broker/load/sockets/5min 215.01
$SYS/broker/load/sockets/15min 200.16
$SYS/broker/load/connections/1min 206.26
$SYS/broker/load/connections/5min 194.19
$SYS/broker/load/connections/15min 186.09

On 2015-06-02 12:57:59 -0400, Roger Light wrote:

Hmm, ok, I think there must be some bugs in the $SYS collection code, otherwise I'm not sure how you ended up with 18 exabytes of RAM and some other odd bits.

I think the best bet if you're running on Linux and it's possible, would be to perform some profiling to get to the bottom of where the problem is. The Linux perf tool does the job nicely without having a big impact. I've just set up profiling on test.mosquitto.org to see what it throws up using the command:

perf record -o /path/to/perf.data -e cycles:u /usr/local/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf

This will collect user space function information (you would need an unstripped mosquitto executable) until the broker is stopped. You could then look at the profile information with

perf report -i /path/to/perf.data

Is this something you could do?

On 2015-06-02 13:15:15 -0400, Roger Light wrote:

It would be better without "-e cycles:u" actually.

On 2015-07-09 13:18:32 -0400, Martin Rauscher wrote:

Created attachment 255092
output from linux perf

Added a sample while mosquitto was running at ~20% load

On 2015-07-09 16:07:55 -0400, Roger Light wrote:

Thanks Martin - could you run

perf report -i mosq_perf.log

and post the results please? I'd need the exact same binaries to be able to use the mosq_perf.log myself.

On 2015-07-10 05:24:01 -0400, Martin Rauscher wrote:

Samples: 34K of event 'cpu-clock', Event count (approx.): 8625750000
48.44% mosquitto libc-2.17.so [.] __strcmp_sse42
14.04% mosquitto mosquitto [.] _retain_search.isra.3
12.10% mosquitto mosquitto [.] _sub_search.isra.0
6.86% mosquitto mosquitto [.] _sub_add
3.74% mosquitto [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
3.00% mosquitto [kernel.kallsyms] [k] sock_poll
2.67% mosquitto [kernel.kallsyms] [k] tcp_poll
1.47% mosquitto [kernel.kallsyms] [k] fget_light
1.35% mosquitto [kernel.kallsyms] [k] do_sys_poll
1.05% mosquitto mosquitto [.] mosquitto_main_loop
0.68% mosquitto [kernel.kallsyms] [k] __pollwait
0.67% mosquitto mosquitto [.] strcmp@plt
0.36% mosquitto [kernel.kallsyms] [k] _raw_spin_lock_irqsave
0.22% mosquitto mosquitto [.] mqtt3_db_message_timeout_check
0.16% mosquitto [kernel.kallsyms] [k] get_page_from_freelist
0.14% mosquitto [kernel.kallsyms] [k] fput
0.13% mosquitto libc-2.17.so [.] vfprintf
0.13% mosquitto [kernel.kallsyms] [k] copy_user_enhanced_fast_string
0.13% mosquitto [kernel.kallsyms] [k] add_wait_queue
0.13% mosquitto mosquitto [.] mqtt3_db_message_write
0.10% mosquitto libc-2.17.so [.] __memcpy_ssse3
0.10% mosquitto libc-2.17.so [.] _int_malloc
0.07% mosquitto libc-2.17.so [.] __strlen_sse42
0.07% mosquitto [kernel.kallsyms] [k] pvclock_clocksource_read
0.06% mosquitto libc-2.17.so [.] __malloc_usable_size
0.06% mosquitto libc-2.17.so [.] __memset_sse2
0.05% mosquitto [kernel.kallsyms] [k] tcp_sendmsg
0.05% mosquitto [kernel.kallsyms] [k] __alloc_pages_nodemask
0.05% mosquitto mosquitto [.] _db_subs_retain_write
0.04% mosquitto [kernel.kallsyms] [k] __inet_lookup_established
0.04% mosquitto [kernel.kallsyms] [k] system_call_after_swapgs
0.03% mosquitto [kernel.kallsyms] [k] __do_softirq
0.03% mosquitto [kernel.kallsyms] [k] ip_queue_xmit
0.03% mosquitto [kernel.kallsyms] [k] tcp_ack
0.03% mosquitto libc-2.17.so [.] _IO_default_xsputn
0.03% mosquitto libc-2.17.so [.] _int_free
0.03% mosquitto [vdso] [.] __vdso_clock_gettime
0.03% mosquitto [kernel.kallsyms] [k] __audit_syscall_entry
0.03% mosquitto [kernel.kallsyms] [k] ip_finish_output
0.03% mosquitto [kernel.kallsyms] [k] tcp_transmit_skb
0.03% mosquitto [kernel.kallsyms] [k] tcp_write_xmit
0.03% mosquitto [kernel.kallsyms] [k] __netif_receive_skb_core
0.02% mosquitto [kernel.kallsyms] [k] __audit_syscall_exit
0.02% mosquitto [kernel.kallsyms] [k] release_sock
0.02% mosquitto [kernel.kallsyms] [k] __copy_user_nocache
0.02% mosquitto [kernel.kallsyms] [k] __set_current_blocked
0.02% mosquitto libc-2.17.so [.] clock_gettime
0.02% mosquitto [kernel.kallsyms] [k] __call_rcu
0.02% mosquitto [kernel.kallsyms] [k] alloc_pages_current
0.02% mosquitto [kernel.kallsyms] [k] kmalloc_slab
0.02% mosquitto [kernel.kallsyms] [k] kmem_cache_alloc_node
0.02% mosquitto [kernel.kallsyms] [k] sys_clock_gettime
0.02% mosquitto [kernel.kallsyms] [k] tcp_recvmsg
0.02% mosquitto [kernel.kallsyms] [k] tcp_v4_rcv
0.02% mosquitto [kernel.kallsyms] [k] vfs_write
0.02% mosquitto [vdso] [.] __vdso_time
0.01% mosquitto [kernel.kallsyms] [k] dev_queue_xmit
0.01% mosquitto [kernel.kallsyms] [k] ip_rcv
0.01% mosquitto [kernel.kallsyms] [k] ktime_get_ts

On 2015-07-10 05:25:43 -0400, Martin Rauscher wrote:

$SUB during this time:
[root@prod01mqweb03 ~]# /usr/bin/mosquitto_sub -h localhost -p 1883 -t '$SYS/#' -
$SYS/broker/version mosquitto version 1.4.2
$SYS/broker/timestamp 2015-05-21 18:10:14+0000
$SYS/broker/connection/prod01mqweb03.prod01mqweb03/state 1
$SYS/broker/uptime 153186 seconds
$SYS/broker/clients/total 290
$SYS/broker/clients/inactive -1
$SYS/broker/clients/disconnected -1
$SYS/broker/clients/active 291
$SYS/broker/clients/connected 291
$SYS/broker/clients/expired 0
$SYS/broker/messages/stored 864
$SYS/broker/messages/received 16456101
$SYS/broker/messages/sent 17212676
$SYS/broker/subscriptions/count 1589
$SYS/broker/retained messages/count 129
$SYS/broker/heap/current 18446744073441639872
$SYS/broker/heap/maximum 18446744073709551608
$SYS/broker/publish/messages/dropped 0
$SYS/broker/publish/messages/received 145
$SYS/broker/publish/messages/sent 2471728
$SYS/broker/publish/bytes/received 16493512159
$SYS/broker/publish/bytes/sent 96417206987
$SYS/broker/bytes/received 17063373857
$SYS/broker/bytes/sent 95056614905
$SYS/broker/load/messages/received/1min 5268.20
$SYS/broker/load/messages/received/5min 5223.89
$SYS/broker/load/messages/received/15min 5279.15
$SYS/broker/load/messages/sent/1min 5447.88
$SYS/broker/load/messages/sent/5min 5378.21
$SYS/broker/load/messages/sent/15min 5435.05
$SYS/broker/load/publish/sent/1min 806.94
$SYS/broker/load/publish/sent/5min 788.03
$SYS/broker/load/publish/sent/15min 788.83
$SYS/broker/load/publish/received/1min 0.06
$SYS/broker/load/publish/received/5min 0.20
$SYS/broker/load/publish/received/15min 0.07
$SYS/broker/load/bytes/received/1min 7391590.30
$SYS/broker/load/bytes/received/5min 7680445.38
$SYS/broker/load/bytes/received/15min 7701798.55
$SYS/broker/load/bytes/sent/1min 35682301.73
$SYS/broker/load/bytes/sent/5min 35711733.29
$SYS/broker/load/bytes/sent/15min 35801685.27
$SYS/broker/load/sockets/1min 167.82
$SYS/broker/load/sockets/5min 164.04
$SYS/broker/load/sockets/15min 166.71
$SYS/broker/load/connections/1min 167.60
$SYS/broker/load/connections/5min 163.78
$SYS/broker/load/connections/15min 166.30

On 2015-07-14 11:19:09 -0400, Roger Light wrote:

Martin - is your topic tree very broad, i.e. some thousands of topics per level?

I'm aware this is a problem, someone has been working on a patch to improve performance in that case.

On 2015-07-15 12:03:47 -0400, Martin Rauscher wrote:

Hello Roger,

well, yes and no. We have many short lived topics, which depending on the implementation of mosquitto may stack up.

We have ~1 million clients. At any given time there are between 200-1000 actually connected clients. Each client has a unique (Client-)ID and has a subscription to "point2point/%c" which the server sends replies to. So, if mosquitto does not clean up an unused topic then, over time, there will be many tens of thousands of topics below this one subtopic...

I see that artificially introducing topic levels may solve the issue but I don't consider this as an option.

On 2015-07-16 03:49:24 -0400, Martin Rauscher wrote:

I had a look at the source and, if I understood correctly, topics are only cleared on unsubscribe, but not when a client (with cleanSession=true) disconnects. If this is the case, then I think that is a bug and would explain the behavior we see pretty well.
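For illustration only, the sketch below shows the general idea being discussed here: when a clean-session client disconnects and its subscription leaf becomes empty, unlink it and prune any ancestors that are also left with no children, no subscribers and no retained message. The structures and names are invented and do not correspond to mosquitto's actual subscription tree code, nor to the attached patches.

/* Hypothetical subscription tree node; not mosquitto's real structures. */
#include <stdlib.h>

struct subscriber;   /* opaque here */

struct sub_node {
    struct sub_node *parent;
    struct sub_node *children;   /* first child */
    struct sub_node *next;       /* next sibling */
    struct subscriber *subs;     /* clients subscribed at this level */
    void *retained;              /* retained message stored at this level */
};

/* Starting from a now-empty leaf, unlink and free empty nodes as long as
 * their ancestors also become empty. */
static void prune_upwards(struct sub_node *node)
{
    while(node && node->parent
            && !node->children && !node->subs && !node->retained){
        struct sub_node *parent = node->parent;

        /* Unlink 'node' from its parent's child list. */
        struct sub_node **pp = &parent->children;
        while(*pp && *pp != node){
            pp = &(*pp)->next;
        }
        if(*pp){
            *pp = node->next;
        }
        free(node);
        node = parent;   /* the parent may now be empty too */
    }
}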

On 2015-07-16 13:23:04 -0400, Roger Light wrote:

Created attachment 255258
proposed patch

I agree, that would seem a good candidate. I've attached a patch that passes tests and works for me.

On 2015-07-20 10:47:47 -0400, Michael Hekel wrote:

Hi Roger.

Thanks for your help. We tested the patch.
It seems that the change is a little bit too extreme.

If no client at all is connected, all retained messages are deleted.

Steps:
Several clients connected
Retained message is sent
All clients disconnect
One client reconnects --> retained message is lost.

On 2015-07-25 16:11:34 -0400, Roger Light wrote:

Created attachment 255430
Updated patch

Ah yes, I wasn't checking for whether a node had a retained message. This patch does that.

On 2015-07-28 11:39:13 -0400, Michael Hekel wrote:

Hi Roger.

Thanks for the latest patch. We tried it, but it didn't work in our production environment (it works on integration, so it might have something to do with high usage).

The "asdfs" log statement, which has been added in your patch was seen a lot. I will send you the mosquitto log file by mail.

Thanks for your support. I hope we can find a solution...

On 2015-08-12 04:12:52 -0400, Martin Rauscher wrote:

Created attachment 255803
3rd Patch

For completeness sake, I attached Roger's 3rd patch.

On 2015-08-19 11:25:34 -0400, Roger Light wrote:

I'm marking this as fixed.

On 2015-08-19 11:28:55 -0400, Martin Rauscher wrote:

Unfortunately the latest patch had no effect. CPU usage still continues to rise.

On 2015-08-19 13:05:02 -0400, Martin Rauscher wrote:

Still same issue:
43.61% mosquitto libc-2.17.so [.] __strcmp_sse42
15.85% mosquitto mosquitto [.] _retain_search.isra.3
14.40% mosquitto mosquitto [.] _sub_search.isra.0
7.60% mosquitto mosquitto [.] _sub_add
3.95% mosquitto [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
2.79% mosquitto [kernel.kallsyms] [k] sock_poll
2.18% mosquitto [kernel.kallsyms] [k] tcp_poll
2.06% mosquitto mosquitto [.] mosquitto_main_loop
1.22% mosquitto [kernel.kallsyms] [k] fget_light
1.18% mosquitto [kernel.kallsyms] [k] do_sys_poll
0.63% mosquitto mosquitto [.] strcmp@plt
0.41% mosquitto [kernel.kallsyms] [k] __pollwait
0.38% mosquitto [kernel.kallsyms] [k] _raw_spin_lock_irqsave
0.35% mosquitto mosquitto [.] mqtt3_db_message_timeout_check
0.26% mosquitto [kernel.kallsyms] [k] fput
0.18% mosquitto mosquitto [.] mqtt3_db_message_write

On 2015-11-02 11:21:45 -0500, Guido Hinderberger wrote:

Hi Roger,

As the last patch did not fix the problem, we took a closer look at the code and want to submit a fix attempt :)

We tested the fix by dumping the subscription tree. With our small adaptation of your implementation, the tree seems to be cleared correctly if a client disconnects without unsubscribing.

It would be great if you could have a look into our fix attempt and verify if it makes sense.

Regards
Guido

On 2015-11-02 11:23:40 -0500, Guido Hinderberger wrote:

Created attachment 257684
fix attempt

On 2015-11-02 11:25:14 -0500, Guido Hinderberger wrote:

The current fix seems not to work. The subscription tree is growing constantly...

On 2015-11-02 16:52:06 -0500, Roger Light wrote:

Thanks Guido, I'd not forgotten about this.

Your changes look like they've found the mistakes I've made. I'm running your patch now on test.mosquitto.org.

For me to upload it, it would be best for you to sign the Eclipse CLA, then sign off on the patch with "This contribution complies with http://www.eclipse.org/legal/CoO.php" in a bug comment, and I'll commit the change.

On 2015-11-02 19:06:39 -0500, Roger Light wrote:

I'm seeing crashes, so something isn't right; I haven't figured out what yet though.

On 2015-11-03 06:23:07 -0500, Roger Light wrote:

Created attachment 257699
Patch 4

I've updated my patch; this is working fine on test.mosquitto.org, and the additional change does make sense to me.

Could you please verify that it works for you?

On 2015-11-04 09:01:20 -0500, Guido Hinderberger wrote:

My contribution complies with http://www.eclipse.org/legal/CoO.php.

On 2015-11-04 09:05:41 -0500, Guido Hinderberger wrote:

We deployed the patched version to our production system today, and if the CPU usage does not grow significantly by tomorrow, the problem should be fixed.

On 2015-11-04 11:55:19 -0500, Roger Light wrote:

Thanks Guido.

I'll aim to do the 1.4.5 release over the weekend, so as long as I have another update from you by the end of the week that would be just fine.

On 2015-11-06 08:12:40 -0500, Guido Hinderberger wrote:

Hi Roger, the CPU usage has stopped growing, so the problem is fixed.
Regards, Guido

200 ms delay in MQTT dispatch with Mosquitto vs. Mosca

migrated from Bugzilla #482714
status UNCONFIRMED severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-11-20 12:28:36 -0500, Cefn Hoile wrote:

Greetings, and thanks for all your work on Mosquitto, which we use as a reference MQTT websockets server in our work. Any ideas what we could be getting wrong in our build or configuration of Mosquitto to create HUGE delays in dispatch? We've had a good go at investigating alternative configurations, but suspect there's something obvious we've missed.

We built 1.4.2 against libwebsockets.so.3.0.0 with only two changes: adding websocket support (WITH_WEBSOCKETS) and removing SRV support (WITH_SRV).
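Assuming the standard Makefile options documented for the project, that build corresponds to something like:

make WITH_WEBSOCKETS=yes WITH_SRV=no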

However, we're really surprised to see a consistent 200 ms delay per message when sending through Mosquitto versus Mosca, which you can see in the logs below. What takes 147 milliseconds in total with Mosca takes around 2200 milliseconds with Mosquitto! Of course, with QoS 2 this doubles.

Comma-separated timestamps are shown with 'absolute', 'relative to the start of the test', and 'relative to the last logged event' in milliseconds. In both cases, the client and server are co-hosted on localhost.

Our test suite has an MQTT.js websockets client which subscribes globally, sends a single message, then each time it is notified of its last message, it sends the next message in the series, timestamping the events to calculate round-trip time. The only difference between the Mosca and Mosquitto test cases is the launchServer and killServer implementations (a separate implementation is provided for each flavour of server).

You can see the test code for "Can ping back messages to test roundtrip time" at https://gist.github.com/cefn/be716b3099ba2194b473
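As a possible way to separate broker latency from websockets/MQTT.js overhead, here is a rough sketch of an equivalent round-trip test using the libmosquitto C client over plain MQTT. The port, topic, client ID and message count are assumptions for illustration; this is not the test from the gist above.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <mosquitto.h>

#define TOPIC "test/roundtrip"
#define COUNT 10

static int received = 0;
static struct timespec start;

static double ms_since(const struct timespec *t0)
{
	struct timespec now;
	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - t0->tv_sec) * 1000.0 +
	       (now.tv_nsec - t0->tv_nsec) / 1e6;
}

static void on_connect(struct mosquitto *mosq, void *obj, int rc)
{
	(void)obj;
	if(rc == 0){
		mosquitto_subscribe(mosq, NULL, TOPIC, 0);
	}
}

static void on_subscribe(struct mosquitto *mosq, void *obj, int mid,
                         int qos_count, const int *granted_qos)
{
	(void)obj; (void)mid; (void)qos_count; (void)granted_qos;
	clock_gettime(CLOCK_MONOTONIC, &start);
	mosquitto_publish(mosq, NULL, TOPIC, 5, "msg:0", 0, false);
}

static void on_message(struct mosquitto *mosq, void *obj,
                       const struct mosquitto_message *msg)
{
	(void)obj; (void)msg;
	printf("Received msg %d after %.1f ms\n", received, ms_since(&start));
	if(++received < COUNT){
		/* publish the next message as soon as the previous one comes back */
		mosquitto_publish(mosq, NULL, TOPIC, 5, "msg:n", 0, false);
	}else{
		mosquitto_disconnect(mosq);
	}
}

int main(void)
{
	mosquitto_lib_init();
	struct mosquitto *mosq = mosquitto_new("roundtrip-test", true, NULL);
	mosquitto_connect_callback_set(mosq, on_connect);
	mosquitto_subscribe_callback_set(mosq, on_subscribe);
	mosquitto_message_callback_set(mosq, on_message);
	/* Assumes a broker with a plain MQTT listener on localhost:1883. */
	mosquitto_connect(mosq, "localhost", 1883, 60);
	mosquitto_loop_forever(mosq, -1, 1);
	mosquitto_destroy(mosq);
	mosquitto_lib_cleanup();
	return 0;
}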

The mosquitto.conf we are using looks like...

listener 3000 127.0.0.1
protocol websockets

MOSQUITTO Log
1448039685811:0 Connected
1448039686010:199:199 Subscribed
1448039686011:200:1 Sending msg:0
1448039686209:398:198 Received msg:0
1448039686210:399:1 Sending msg:1
1448039686409:598:199 Received msg:1
1448039686409:598:0 Sending msg:2
1448039686608:797:199 Received msg:2
1448039686608:797:0 Sending msg:3
1448039686808:997:200 Received msg:3
1448039686809:998:1 Sending msg:4
1448039687009:1198:200 Received msg:4
1448039687009:1198:0 Sending msg:5
1448039687209:1398:200 Received msg:5
1448039687209:1398:0 Sending msg:6
1448039687410:1599:201 Received msg:6
1448039687410:1599:0 Sending msg:7
1448039687610:1799:200 Received msg:7
1448039687610:1799:0 Sending msg:8
1448039687811:2000:201 Received msg:8
1448039687811:2000:0 Sending msg:9
1448039688012:2201:201 Received msg:9

MOSCA Log
1448039633828:0 Connected
1448039633854:26:26 Subscribed
1448039633854:26:0 Sending msg:0
1448039633873:45:19 Received msg:0
1448039633873:45:0 Sending msg:1
1448039633888:60:15 Received msg:1
1448039633888:60:0 Sending msg:2
1448039633902:74:14 Received msg:2
1448039633902:74:0 Sending msg:3
1448039633910:82:8 Received msg:3
1448039633911:83:1 Sending msg:4
1448039633919:91:8 Received msg:4
1448039633919:91:0 Sending msg:5
1448039633930:102:11 Received msg:5
1448039633930:102:0 Sending msg:6
1448039633942:114:12 Received msg:6
1448039633942:114:0 Sending msg:7
1448039633954:126:12 Received msg:7
1448039633954:126:0 Sending msg:8
1448039633966:138:12 Received msg:8
1448039633966:138:0 Sending msg:9
1448039633975:147:9 Received msg:9

Hope this test case is digestible; we're happy to investigate by running further diagnostics under your guidance if you have the time to look into it.

Thanks,

Cefn

Topic branch renaming in bridge not consistent

migrated from Bugzilla #473286
status NEW severity normal in component Mosquitto for 1.4
Reported in version 1.4 on platform PC
Assigned to: Roger Light

On 2015-07-22 10:07:25 -0400, Jan-Piet Mens wrote:

I'm trying to rename a topic branch from, say, owntracks/jpm/5s to RAL/name.

Using 1.4.2 and 1.5-devel from Git, I see the following when publishing to owntracks/jpm/5s:

a. Using "topic # in 0 RAL/ owntracks/", clients get RAL/jpm/5s, which is OK.
b. Using "topic # in 0 RAL/two/ owntracks/jpm/", clients get RAL/two/5s, which is OK.
c. Using "topic # in 0 RAL/three/ owntracks/jpm/5s/", clients get RAL/three/owntracks/jpm/5s, which is a BUG.

The last is quite unexpected, and should work.

On 2015-07-22 10:13:04 -0400, Jan-Piet Mens wrote:

We're seeing this behaviour both for 'in' and for 'out' connections.

On 2015-07-25 18:02:50 -0400, Roger Light wrote:

The behaviour you're relying on is that foo/# matches against foo. Bridge topic remapping is simpler than that, I'm afraid. It is literally trying to remove "owntracks/jpm/5s/" from "owntracks/jpm/5s" and failing because the former isn't a substring of the latter.

I'm not sure what to do about it at the moment. Good choice of target topic name though.
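A simplified sketch of the literal prefix-strip remapping described above (illustrative only, not the actual bridge code): the remote prefix is only removed when it is literally a prefix of the incoming topic, which is why case (c) ends up with the local prefix prepended to the unchanged topic.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* local_prefix and remote_prefix stand for the two prefixes on the bridge
 * "topic" line, e.g. "RAL/three/" and "owntracks/jpm/5s/". */
static char *remap_incoming(const char *topic,
                            const char *local_prefix,
                            const char *remote_prefix)
{
	size_t rlen = strlen(remote_prefix);
	const char *rest = topic;

	/* "owntracks/jpm/5s/" is not a prefix of "owntracks/jpm/5s", so in
	 * case (c) nothing is stripped and the full topic is kept. */
	if(strncmp(topic, remote_prefix, rlen) == 0){
		rest = topic + rlen;
	}

	char *out = malloc(strlen(local_prefix) + strlen(rest) + 1);
	if(!out) return NULL;
	sprintf(out, "%s%s", local_prefix, rest);
	return out;
}

int main(void)
{
	char *t = remap_incoming("owntracks/jpm/5s", "RAL/three/", "owntracks/jpm/5s/");
	printf("%s\n", t);   /* prints RAL/three/owntracks/jpm/5s - the reported behaviour */
	free(t);
	return 0;
}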

[PATCH] Cross-compiling websockets.c

migrated from Bugzilla #475808
status CLOSED severity normal in component Mosquitto for 1.4
Reported in version unspecified on platform PC
Assigned to: Roger Light

On 2015-08-25 09:17:27 -0400, Tyler Brandon wrote:

diff --git a/src/Makefile b/src/Makefile
index 2cfb7d4..2bc70de 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -90,7 +90,7 @@ util_mosq.o : ../lib/util_mosq.c ../lib/util_mosq.h
 	${CROSS_COMPILE}${CC} $(BROKER_CFLAGS) -c $< -o $@
 
 websockets.o : websockets.c mosquitto_broker.h
-	${CC} $(BROKER_CFLAGS) -c $< -o $@
+	${CROSS_COMPILE}${CC} $(BROKER_CFLAGS) -c $< -o $@
 
 will_mosq.o : ../lib/will_mosq.c ../lib/will_mosq.h
 	${CROSS_COMPILE}${CC} $(BROKER_CFLAGS) -c $< -o $@

On 2015-09-16 16:04:26 -0400, Roger Light wrote:

*** This bug has been marked as a duplicate of bug 475807 ***
