
naz


naz is an async SMPP client.
Its name is derived from the Kenyan hip hop artiste, Nazizi.

SMPP is a protocol designed for the transfer of short message data between External Short Messaging Entities(ESMEs), Routing Entities(REs) and Short Message Service Center(SMSC). - Wikipedia

naz currently only supports SMPP version 3.4.
naz has no third-party dependencies and it requires Python version 3.7+.

naz is in active development and its API may change in backward-incompatible ways.
https://pypi.python.org/pypi/naz

Comprehensive documentation is available -> Documentation

Contents:
Installation
Usage

Features

Benchmarks

Installation

pip install naz

Usage

1. As a library

import asyncio
import naz

loop = asyncio.get_event_loop()
broker = naz.broker.SimpleBroker(maxsize=1000)
cli = naz.Client(
    smsc_host="127.0.0.1",
    smsc_port=2775,
    system_id="smppclient1",
    password="password",
    broker=broker,
)

# queue messages to send
for i in range(0, 4):
    print("submit_sm round:", i)
    msg = naz.protocol.SubmitSM(
                short_message="Hello World-{0}".format(str(i)),
                log_id="myid12345",
                source_addr="254722111111",
                destination_addr="254722999999",
            )
    loop.run_until_complete(
          cli.send_message(msg)
    )


try:
    # 1. connect to the SMSC host
    # 2. bind to the SMSC host
    # 3. send any queued messages to SMSC
    # 4. read any data from SMSC
    # 5. continually check the state of the SMSC
    tasks = asyncio.gather(
        cli.connect(),
        cli.tranceiver_bind(),
        cli.dequeue_messages(),
        cli.receive_data(),
        cli.enquire_link(),
    )
    loop.run_until_complete(tasks)
except Exception as e:
    print("exception occurred. error={0}".format(str(e)))
finally:
    loop.run_until_complete(cli.unbind())
    loop.stop()

NB:
(a) For more information about all the parameters that naz.Client can take, consult the documentation here
(b) More examples can be found here
(c) if you need an SMSC server/gateway to test with, you can use the docker-compose file in this repo to bring up an SMSC simulator.
That docker-compose file also has a redis and a RabbitMQ container if you would like to use those as your broker.

2. As a cli app

naz also ships with a commandline interface app called naz-cli.
create a python config file, eg;
/tmp/my_config.py

import naz
from myfile import ExampleBroker

client = naz.Client(
    smsc_host="127.0.0.1",
    smsc_port=2775,
    system_id="smppclient1",
    password="password",
    broker=ExampleBroker()
)

and a python file, myfile.py (in the current working directory) with the contents:

import asyncio
import naz

class ExampleBroker(naz.broker.BaseBroker):
    def __init__(self):
        self.queue = asyncio.Queue(maxsize=1000)
    async def enqueue(self, message):
        self.queue.put_nowait(message)
    async def dequeue(self):
        return await self.queue.get()

then run:
naz-cli --client tmp.my_config.client

	 Naz: the SMPP client.

{'event': 'naz.Client.connect', 'stage': 'start', 'environment': 'production', 'release': 'canary', 'smsc_host': '127.0.0.1', 'system_id': 'smppclient1', 'client_id': '2VU55VT86KHWXTW7X'}
{'event': 'naz.Client.connect', 'stage': 'end', 'environment': 'production', 'release': 'canary', 'smsc_host': '127.0.0.1', 'system_id': 'smppclient1', 'client_id': '2VU55VT86KHWXTW7X'}
{'event': 'naz.Client.tranceiver_bind', 'stage': 'start', 'environment': 'production', 'release': 'canary', 'smsc_host': '127.0.0.1', 'system_id': 'smppclient1', 'client_id': '2VU55VT86KHWXTW7X'}
{'event': 'naz.Client.send_data', 'stage': 'start', 'smpp_command': 'bind_transceiver', 'log_id': None, 'msg': 'hello', 'environment': 'production', 'release': 'canary', 'smsc_host': '127.0.0.1', 'system_id': 'smppclient1', 'client_id': '2VU55VT86KHWXTW7X'}
{'event': 'naz.SimpleHook.to_smsc', 'stage': 'start', 'smpp_command': 'bind_transceiver', 'log_id': None, 'environment': 'production', 'release': 'canary', 'smsc_host': '127.0.0.1', 'system_id': 'smppclient1', 'client_id': '2VU55VT86KHWXTW7X'}
{'event': 'naz.Client.send_data', 'stage': 'end', 'smpp_command': 'bind_transceiver', 'log_id': None, 'msg': 'hello', 'environment': 'production', 'release': 'canary', 'smsc_host': '127.0.0.1', 'system_id': 'smppclient1', 'client_id': '2VU55VT86KHWXTW7X'}
{'event': 'naz.Client.tranceiver_bind', 'stage': 'end', 'environment': 'production', 'release': 'canary', 'smsc_host': '127.0.0.1', 'system_id': 'smppclient1', 'client_id': '2VU55VT86KHWXTW7X'}
{'event': 'naz.Client.dequeue_messages', 'stage': 'start', 'environment': 'production', 'release': 'canary', 'smsc_host': '127.0.0.1', 'system_id': 'smppclient1', 'client_id': '2VU55VT86KHWXTW7X'}

NB:
(a) The naz config file (ie, the dotted path we pass in to naz-cli --client) is any python file that has a naz.Client instance declared in it.
(b) More examples can be found here. As an example, start the SMSC simulator(docker-compose up) then in another terminal run, naz-cli --client examples.example_config.client

To see help:

naz-cli --help

naz is an async SMPP client.     
example usage: naz-cli --client path.to.my_config.client

optional arguments:
  -h, --help            show this help message and exit
  --version             The currently installed naz version.
  --client CLIENT       The config file to use. eg: --client path.to.my_config.client

Features

1. async everywhere

SMPP is an async protocol; the client can send a request and only get a response from the SMSC/server 20 minutes later, out of band.
It thus makes sense to write your SMPP client in an async manner. We leverage python3's async/await to do so.

import naz
import asyncio

loop = asyncio.get_event_loop()
broker = naz.broker.SimpleBroker(maxsize=1000)
cli = naz.Client(
    smsc_host="127.0.0.1",
    smsc_port=2775,
    system_id="smppclient1",
    password="password",
    broker=broker,
)

2. monitoring and observability

it's a loaded term, I know.

2.1 logging

In naz you have the ability to annotate all the log events that naz will generate with anything you want.
For example, you may want to annotate all log events with a release version and your app's running environment:

import naz

logger = naz.log.SimpleLogger(
                "naz.client",
                log_metadata={ "environment": "production", "release": "v5.6.8"}
            )
cli = naz.Client(
    ...
    logger=logger,
)

These will then show up in all log events.
By default, naz annotates all log events with smsc_host, system_id and client_id.

2.2 hooks

A hook is a class with two methods, to_smsc and from_smsc; ie it implements naz's BaseHook interface as defined here.
naz will call the to_smsc method just before sending data to the SMSC, and the from_smsc method just after getting data from the SMSC.
The default hook that naz uses is naz.hooks.SimpleHook, which does nothing but log.
If you wanted, for example, to keep metrics of all requests and responses to the SMSC in your prometheus setup:

import naz
from prometheus_client import Counter

class MyPrometheusHook(naz.hooks.BaseHook):
    async def to_smsc(self, smpp_command, log_id, hook_metadata, pdu):
        c = Counter('my_requests', 'Description of counter')
        c.inc() # Increment by 1
    async def from_smsc(self,
                    smpp_command,
                    log_id,
                    hook_metadata,
                    status,
                    pdu):
        c = Counter('my_responses', 'Description of counter')
        c.inc() # Increment by 1

myHook = MyPrometheusHook()
cli = naz.Client(
    ...
    hook=myHook,
)

Another example is if you want to update a database record whenever you get a delivery notification event;

import sqlite3
import naz

class SetMessageStateHook(naz.hooks.BaseHook):
    async def to_smsc(self, smpp_command, log_id, hook_metadata, pdu):
        pass
    async def from_smsc(self,
                    smpp_command,
                    log_id,
                    hook_metadata,
                    status,
                    pdu):
        if smpp_command == naz.SmppCommand.DELIVER_SM:
            conn = sqlite3.connect('mySmsDB.db')
            c = conn.cursor()
            t = (log_id,)
            # watch out for SQL injections!!
            c.execute("UPDATE SmsTable SET State='delivered' WHERE CorrelationID=?", t)
            conn.commit()
            conn.close()

stateHook = SetMessageStateHook()
cli = naz.Client(
    ...
    hook=stateHook,
)

2.3 integration with bug trackers

If you want to integrate naz with your bug/issue tracker of choice, all you have to do is use their logging integrator.
As an example, to integrate naz with sentry, you just import and init the sentry sdk. A good place to do that would be in the naz config file, ie;
/tmp/my_config.py

import naz
from myfile import ExampleBroker

import sentry_sdk # import sentry SDK
sentry_sdk.init("https://<YOUR_SENTRY_PUBLIC_KEY>@sentry.io/<YOUR_SENTRY_PROJECT_ID>")

my_naz_client = naz.Client(
    smsc_host="127.0.0.1",
    smsc_port=2775,
    system_id="smppclient1",
    password="password",
    broker=ExampleBroker()
)

then run the naz-cli as usual:
naz-cli --client tmp.my_config.my_naz_client
And just like that, you are good to go. This is what errors from naz will look like on sentry (sans the emojis, of course):

naz integration with sentry

3. Rate limiting

Sometimes you want to control the rate at which the client sends requests to an SMSC/server. naz lets you do this, by allowing you to specify a custom rate limiter. By default, naz uses a simple token bucket rate limiting algorithm implemented here.
You can customize naz's ratelimiter or even write your own (if you do write your own, you just have to satisfy the BaseRateLimiter interface found here).
To customize the default ratelimiter, for example to send at a rate of 35 requests per second:

import naz

myLimiter = naz.ratelimiter.SimpleRateLimiter(send_rate=35)
cli = naz.Client(
    ...
    rate_limiter=myLimiter,
)

4. Throttle handling

Sometimes, when a client sends requests to an SMSC/server, the SMSC may reply with an ESME_RTHROTTLED status.
This can happen, say, if the client has surpassed the rate at which it is supposed to send requests, or if the SMSC is under load, or for whatever other reason ¯\_(ツ)_/¯
The way naz handles throttling is via throttle handlers.
A throttle handler is a class that implements the BaseThrottleHandler interface as defined here
naz calls that class's throttled method every time it gets a throttled (ESME_RTHROTTLED) response from the SMSC, and it also calls that class's not_throttled method every time it gets a response from the SMSC that is NOT a throttled response.
naz will also call that class's allow_request method just before sending a request to the SMSC. The allow_request method should return True if requests should be allowed to the SMSC, else it should return False.
By default naz uses naz.throttle.SimpleThrottleHandler to handle throttling.
The way SimpleThrottleHandler works is: it calculates the percentage of responses that are throttle responses and then denies outgoing requests (towards the SMSC) if that percentage rises above a certain threshold.
As an example, if you want to deny outgoing requests when the percentage of throttles is above 1.2% over a period of 180 seconds and the total number of responses from the SMSC is greater than 45, then;

import naz

throttler = naz.throttle.SimpleThrottleHandler(sampling_period=180,
                                               sample_size=45,
                                               deny_request_at=1.2)
cli = naz.Client(
    ...
    throttle_handler=throttler,
)

5. Broker

How does your application and naz talk with each other?
It's via a broker interface. Your application queues messages to a broker, naz consumes from that broker and then naz sends those messages to SMSC/server.
You can implement the broker mechanism any way you like, so long as it satisfies the BaseBroker interface as defined here
Your application should call that class's enqueue method to (you guessed it) enqueue messages to the queue, while naz will call the class's dequeue method to consume from the broker.

naz ships with a simple broker implementation called naz.broker.SimpleBroker.
An example of using that;

import asyncio
import naz

loop = asyncio.get_event_loop()
my_broker = naz.broker.SimpleBroker(maxsize=1000)  # can hold up to 1000 items
cli = naz.Client(
    ...
    broker=my_broker,
)

try:
    # 1. connect to the SMSC host
    # 2. bind to the SMSC host
    # 3. send any queued messages to SMSC
    # 4. read any data from SMSC
    # 5. continually check the state of the SMSC
    tasks = asyncio.gather(
        cli.connect(),
        cli.tranceiver_bind(),
        cli.dequeue_messages(),
        cli.receive_data(),
        cli.enquire_link(),
    )
    loop.run_until_complete(tasks)
except Exception as e:
    print("exception occurred. error={0}".format(str(e)))
finally:
    loop.run_until_complete(cli.unbind())
    loop.stop()

then in your application, queue items to the queue;

# queue messages to send
for i in range(0, 4):
    msg = naz.protocol.SubmitSM(
                short_message="Hello World-{0}".format(str(i)),
                log_id="myid12345",
                source_addr="254722111111",
                destination_addr="254722999999",
            )
    loop.run_until_complete(
          cli.send_message(msg)
    )

Here is another example, but where we now use redis for our broker;

import json
import asyncio
import naz
import aioredis

class RedisExampleBroker(naz.broker.BaseBroker):
    """
    use redis as our broker.
    This implements a basic FIFO queue using redis.
    Basically we use the redis command LPUSH to push messages onto the queue and BRPOP to pull them off.
    https://redis.io/commands/lpush
    https://redis.io/commands/brpop
    You should use a non-blocking redis client eg https://github.com/aio-libs/aioredis
    """
    def __init__(self):
        self.queue_name = "myqueue"
    async def enqueue(self, item):
        _redis = await aioredis.create_redis_pool(address=("localhost", 6379))
        await _redis.lpush(self.queue_name, json.dumps(item))
    async def dequeue(self):
        _redis = await aioredis.create_redis_pool(address=("localhost", 6379))
        x = await _redis.brpop(self.queue_name)
        dequeued_item = json.loads(x[1].decode())
        return dequeued_item

loop = asyncio.get_event_loop()
broker = RedisExampleBroker()
cli = naz.Client(
    smsc_host="127.0.0.1",
    smsc_port=2775,
    system_id="smppclient1",
    password="password",
    broker=broker,
)

try:
    # 1. connect to the SMSC host
    # 2. bind to the SMSC host
    # 3. send any queued messages to SMSC
    # 4. read any data from SMSC
    # 5. continually check the state of the SMSC
    tasks = asyncio.gather(
        cli.connect(),
        cli.tranceiver_bind(),
        cli.dequeue_messages(),
        cli.receive_data(),
        cli.enquire_link(),
    )
    loop.run_until_complete(tasks)
except Exception as e:
    print("error={0}".format(str(e)))
finally:
    loop.run_until_complete(cli.unbind())
    loop.stop()

then queue on your application side;

# queue messages to send
for i in range(0, 5):
    print("submit_sm round:", i)
    msg = naz.protocol.SubmitSM(
                short_message="Hello World-{0}".format(str(i)),
                log_id="myid12345",
                source_addr="254722111111",
                destination_addr="254722999999",
            )
    loop.run_until_complete(
          cli.send_message(msg)
    )

6. Well written (if I have to say so myself)

Development setup

TODO

naz's People

Contributors: jwenjian, komuw, thelastofcats
naz's Issues

allow only class instances in config/cli

currently we allow users to either pass in:

  1. python classes
  2. python class instances

in the cli (config.json).
We should disallow 1 and only permit 2.

naz/cli/cli.py

Lines 116 to 128 in 7e90092

# Load custom classes #######################
# Users can either pass in:
# 1. python classes;
# 2. python class instances
# if the thing that the user passed in is a python class, we need to create a class instance.
# we'll use `inspect.isclass` to do that
# todo: test the h** out of this logic
outboundqueue = load_class(kwargs["outboundqueue"])  # this is a mandatory param
if inspect.isclass(outboundqueue):
    # instantiate class instance
    outboundqueue = outboundqueue()
kwargs["outboundqueue"] = outboundqueue
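The class-vs-instance normalisation itself is straightforward; a minimal sketch (the names here are illustrative, not naz's actual API):

```python
import inspect

class ExampleBroker:
    """stand-in for a user-supplied broker class."""
    pass

def ensure_instance(thing):
    # if the user passed a class, instantiate it; otherwise use it as-is
    if inspect.isclass(thing):
        return thing()
    return thing

# both a class and an instance normalise to an instance
assert isinstance(ensure_instance(ExampleBroker), ExampleBroker)
assert isinstance(ensure_instance(ExampleBroker()), ExampleBroker)
```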

create smpp_event object

An smpp event can be one of bind_transceiver, submit_sm, submit_sm_resp, etc.
There are places where users of naz expect these events, eg in hooks[1]

import sqlite3
import naz

class SetMessageStateHook(naz.hooks.BaseHook):
    async def request(self, smpp_event, correlation_id):
        pass
    async def response(self, smpp_event, correlation_id):
        if smpp_event == "deliver_sm":
            conn = sqlite3.connect('mySmsDB.db')
            c = conn.cursor()
            t = (correlation_id,)
            # watch out for SQL injections!!
            c.execute("UPDATE SmsTable SET State='delivered' WHERE CorrelationID=?", t)
            conn.commit()
            conn.close()

stateHook = SetMessageStateHook()
cli = naz.Client(
    ...
    hook=stateHook,
)

instead of users using strings;

if smpp_event == "deliver_sm":

we should give them a concrete object,

if smpp_event == naz.SMPP_EVENT_DELIVER_SM: #bikeshed the name
  1. https://github.com/komuw/naz#22-hooks
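One way to provide such concrete objects is a string-valued enum; a sketch (the eventual naz constant names may well differ):

```python
import enum

class SmppCommand(str, enum.Enum):
    """hypothetical sketch of concrete smpp event constants."""
    BIND_TRANSCEIVER = "bind_transceiver"
    SUBMIT_SM = "submit_sm"
    SUBMIT_SM_RESP = "submit_sm_resp"
    DELIVER_SM = "deliver_sm"

# because it subclasses str, comparison with legacy string events still works
assert SmppCommand.DELIVER_SM == "deliver_sm"
```

The str mixin keeps backward compatibility with any code that still compares against plain strings.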

on connection loss, reconnect and also re-bind

this means we need to be testing whether we still have a connection.
Once we detect connection is lost;

  • we should call client.connect
  • also call client.transceiver_bind

maybe we should also pause all other operations as we are doing the above??

We should also do state transitions

validate input

for naz.Client validate all args passed into __init__

ie; fail early, hard and with all the glory of a stacktrace.
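A sketch of what such eager validation could look like (the function and parameter names are illustrative, not naz's actual code):

```python
def validate_client_args(smsc_host, smsc_port, system_id, password):
    # fail early, hard and with a clear error message (sketch)
    if not isinstance(smsc_host, str):
        raise ValueError(
            "`smsc_host` should be of type str. You entered {0}".format(type(smsc_host))
        )
    if not isinstance(smsc_port, int):
        raise ValueError(
            "`smsc_port` should be of type int. You entered {0}".format(type(smsc_port))
        )
    ...

# a str port fails immediately instead of deep inside the connect path
try:
    validate_client_args("127.0.0.1", "2775", "smppclient1", "password")
    raise AssertionError("expected ValueError for str port")
except ValueError:
    pass
```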

`registered_delivery` should default to `0b00000001`

SMPP v3.4 supports 3 different types of delivery confirmations:

  • SMSC Delivery Receipt (bits 1 and 0)
  • SME originated Acknowledgement (bits 3 and 2)
  • Intermediate Notification (bit 5)

see: section 5.2.17 of SMPP v3.4 spec document.

In naz currently, registered_delivery defaults to 0b00000101, ie we are trying to combine types 1 and 2 of delivery confirmation.

However:
Sending request for one or more types of delivery confirmation does not guarantee that it will be obliged by the SMSC. It is up to the SMSC how exactly it will implement it. - https://stackoverflow.com/questions/15067656/smpp-registered-delivery

Because of this, we should default registered_delivery to 0b00000001, ie type 1 delivery confirmation only:

registered_delivery=0b00000101, # see section 5.2.17
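The bit arithmetic behind those two defaults can be sketched as (the constant names are assumptions based on section 5.2.17, not names from naz):

```python
# registered_delivery bit fields, per section 5.2.17 of the SMPP v3.4 spec
SMSC_DELIVERY_RECEIPT = 0b00000001  # bits 1-0: SMSC Delivery Receipt requested
SME_ACKNOWLEDGEMENT = 0b00000100    # bits 3-2: SME originated Acknowledgement

current_default = SMSC_DELIVERY_RECEIPT | SME_ACKNOWLEDGEMENT  # types 1 and 2
proposed_default = SMSC_DELIVERY_RECEIPT                       # type 1 only

assert current_default == 0b00000101
assert proposed_default == 0b00000001
```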

add generic_nack

from smpp spec section 4.3

A generic_nack response is returned in the following cases:
• Invalid command_length
If the receiving SMPP entity, on decoding an SMPP PDU, detects an invalid
command_length (either too short or too long), it should assume that the data is corrupt. In
such cases a generic_nack PDU must be returned to the message originator.
• Unknown command_id
If an unknown or invalid command_id is received, a generic_nack PDU must also be
returned to the originator.

Connection lost on await writer.drain()

This program

import asyncio
import os

os.environ["PYTHONASYNCIODEBUG"] = "1"

loop = asyncio.get_event_loop()


async def client():
    reader, writer = await asyncio.open_connection("35.173.6.94", 80, loop=loop)
    print("\n connected to 35.173.6.94:80")
    while True:
        print("writer.transport._conn_lost", writer.transport._conn_lost)
        if writer.transport._conn_lost:
            writer.close()
            reader, writer = await asyncio.open_connection("35.173.6.94", 80, loop=loop)

        req = b"hello"
        writer.write(req)
        await writer.drain()
        # import pdb

        # pdb.set_trace()

        print("\nsent request\n")

        data = await reader.read(2)
        print("\n\nread:\n")
        print(data)


loop.run_until_complete(client())

produces the error:

read:

b''
writer.transport._conn_lost 0
Executing <Handle <TaskWakeupMethWrapper object at 0x1053433d8>(<Future finis...events.py:377>) created at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/streams.py:408> took 1.974 seconds
Traceback (most recent call last):
  File "smpp/debug.py", line 35, in <module>
    loop.run_until_complete(client())
  File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 568, in run_until_complete
    return future.result()
  File "smpp/debug.py", line 23, in client
    await writer.drain()
  File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/streams.py", line 348, in drain
    await self._protocol._drain_helper()
  File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/streams.py", line 202, in _drain_helper
    raise ConnectionResetError('Connection lost')
ConnectionResetError: Connection lost

clarify on window size

window size, ie how many requests can be pending before they are responded to.

As far as I know, the SMPP spec does not mandate anyone (client or server) to put a cap on the max number of un-responded requests they can allow.
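If a client nonetheless wanted to cap un-responded requests, the bookkeeping could be as simple as this (a sketch, not part of naz; the cap value is arbitrary):

```python
class WindowTracker:
    """track in-flight (un-responded) requests against a hypothetical cap."""
    def __init__(self, max_inflight=10):
        self.max_inflight = max_inflight
        self.inflight = set()  # pending SMPP sequence numbers

    def can_send(self):
        return len(self.inflight) < self.max_inflight

    def sent(self, sequence_number):
        self.inflight.add(sequence_number)

    def responded(self, sequence_number):
        self.inflight.discard(sequence_number)

w = WindowTracker(max_inflight=2)
w.sent(1)
w.sent(2)
assert not w.can_send()  # window is full
w.responded(1)
assert w.can_send()      # a response frees a slot
```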

memory profiling & benchmarks

since naz-cli is a long-running process, we need to make sure it does not leak memory.

We should run naz over an extended period under load (say 24 or 48 hrs) and profile memory usage.

version queued message

currently, apps have to send a json object like:

item_to_enqueue = {
                "smpp_event": "submit_sm",
                "short_message": "{0}".format(message),
                "correlation_id": "{0}".format(message_id),
                "source_addr": "{0}".format(sender_id),
                "destination_addr": "{0}".format(msisdn),
            }

we should add a version to that object
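A sketch of the same object with a version key added (the key name and value are proposals, not settled API):

```python
item_to_enqueue = {
    "version": "1",  # new: schema version of the queued message
    "smpp_event": "submit_sm",
    "short_message": "Hello World",
    "correlation_id": "myid12345",
    "source_addr": "254722111111",
    "destination_addr": "254722999999",
}

# consumers can then branch on the schema version when parsing
assert item_to_enqueue["version"] == "1"
```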

rename correlation_id

Sometimes, when you do something like the following on your application side:

import asyncio
import naz

loop = asyncio.get_event_loop()
cli = naz.Client(.. )

item_to_enqueue = {
        "version": "1",
        "smpp_command": naz.SmppCommand.SUBMIT_SM,
        "short_message": "Hello World",
        "correlation_id": "myid12345",
        "source_addr": "254722111111",
        "destination_addr": "254722999999",
    }
loop.run_until_complete(outboundqueue.enqueue(item_to_enqueue))

On the naz side; it logs things as:

{
'event': 'naz.Client.send_data', 
'stage': 'start', 
'smpp_command': 'submit_sm', 
'correlation_id': '3de1004e-24cd-4f8d-bdbe-528639e22003', 
'msg': '', 
 'system_id': 'SomeID', 
'client_id': '0R99MA5LI513S6500'
}

Notice that on the app side the correlation_id is myid12345,
while on naz's side it is 3de1004e-24cd-4f8d-bdbe-528639e22003,
which is a version 4 uuid:

import uuid
y=uuid.UUID("3de1004e-24cd-4f8d-bdbe-528639e22003")
y.version  # 4

naz.Client.seq_log should await all responses from SMSC

We use naz.Client.seq_log;

self.seq_log = {}

to correlate SMPP sequence numbers and user supplied log_id's.

When we get a response from the SMSC, we pop off the correlation from naz.Client.seq_log;

naz/naz/client.py

Line 1037 in 69e8146

log_id = self.seq_log.pop(sequence_number, None)

Think of case where:

  • we get submit_sm_resp from SMSC
  • we pop off correlation from naz.Client.seq_log
  • we get deliver_sm_resp from SMSC
  • What do we do?? We no longer have a correlation to associate the deliver_sm_resp with.

We ought to fix this.
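One possible fix is to stop popping and instead expire correlations after a TTL; a sketch (not naz's actual implementation, and the TTL value is arbitrary):

```python
import time

class SeqLog:
    """keep sequence_number -> log_id correlations until a TTL expires,
    so a later deliver_sm_resp can still be correlated (sketch)."""
    def __init__(self, ttl=15 * 60):
        self.ttl = ttl
        self._store = {}  # sequence_number -> (log_id, stored_at)

    def put(self, sequence_number, log_id):
        self._store[sequence_number] = (log_id, time.monotonic())

    def get(self, sequence_number):
        # read without removing; submit_sm_resp and deliver_sm_resp can both look it up
        entry = self._store.get(sequence_number)
        return entry[0] if entry else None

    def purge_expired(self):
        now = time.monotonic()
        self._store = {k: v for k, v in self._store.items() if now - v[1] < self.ttl}

s = SeqLog()
s.put(42, "myid12345")
assert s.get(42) == "myid12345"
assert s.get(42) == "myid12345"  # still available for a second lookup
```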

we should not have 'correlation_id': None

generate a unique ID if correlation_id==None, but do not do that for user-supplied events (eg submit_sm)

ie; we should only do this for automatic events (eg transceiver_bind, enquire_link)
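A sketch of that rule (the set of automatic events here is an assumption for illustration):

```python
import uuid

# events naz itself originates, as opposed to user-supplied ones like submit_sm (assumption)
AUTOMATIC_EVENTS = {"bind_transceiver", "enquire_link"}

def correlation_for(smpp_event, correlation_id=None):
    # only auto-generate an id for naz's own events; never overwrite a user's id
    if correlation_id is None and smpp_event in AUTOMATIC_EVENTS:
        return str(uuid.uuid4())
    return correlation_id

assert correlation_for("enquire_link") is not None       # auto-generated
assert correlation_for("submit_sm") is None              # user event: left alone
assert correlation_for("submit_sm", "myid12345") == "myid12345"
```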

implement SMPP timers that concern ESME

see; sections 2.9 and 7.2 of SMPP specification document v3.4

One of those timers is;

name: enquire_link_timer
Action_on_expiration: An enquire_link request should be initiated.
Description: This timer specifies the time lapse allowed between operations after which an SMPP entity should interrogate whether its peer still has an active session.
This timer may be active on either communicating SMPP entity (i.e. SMSC or ESME).

We have already kind of implemented this timer:

enquire_link_interval=300,

Maybe change its name from enquire_link_interval to enquire_link_timer so as to match the SMPP spec document.

The other timers are:

  1. session_init_timer: This timer should be active on the SMSC.
    Action_on_expiration: The network connection should be terminated.
    Description: This timer specifies the time lapse allowed between a network connection being
    established and a bind_transmitter or bind_receiver request being sent to the SMSC.
    naz does not need to implement it.
  2. inactivity_timer: can be active on both ESME & SMSC.
    Action_on_expiration: The SMPP session should be dropped.
    Description: This timer specifies the maximum time lapse allowed between transactions, after which period of inactivity, an SMPP entity may assume that the session is no longer active.
    naz does not need to implement it.
  3. response_timer: can be active on both ESME & SMSC.
    Action_on_expiration: The entity which originated the SMPP Request may assume that Request has not been processed and should take the appropriate action for the particular SMPP operation.
    Description: This timer specifies the time lapse allowed between an SMPP request and the corresponding SMPP response.
    naz does not need to implement it.

make receive messages wait period configurable

after binding to the SMSC, we periodically "poll" the SMSC to receive messages.
If there are no messages, we sleep for 8 seconds;

await asyncio.sleep(8)

  • We should make that value configurable
  • It should have a default value
  • The default value should be a bit bigger than 8 seconds; this is because we only sleep if there was no data the last time we tried to read. If we got data the last time we tried to read, we do not sleep
  • the wait period should exponentially back off up to a maximum AND then wrap around
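The proposed back-off could be sketched as a generator (the base and maximum values here are illustrative, not naz defaults):

```python
def wait_periods(base=8.0, maximum=128.0):
    """yield sleep durations that double up to `maximum`, then wrap around (sketch)."""
    delay = base
    while True:
        yield delay
        delay = delay * 2
        if delay > maximum:
            delay = base  # wrap around to the base period

periods = wait_periods()
first_six = [next(periods) for _ in range(6)]
# doubles 8 -> 16 -> 32 -> 64 -> 128, then wraps back to 8
assert first_six == [8.0, 16.0, 32.0, 64.0, 128.0, 8.0]
```

The caller would `await asyncio.sleep(next(periods))` only on empty reads, and re-create the generator whenever data arrives.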

hooks should accept other metadata

Apps may want to pass additional metadata when they queue messages to the queue, and they would want that metadata passed on when hooks are called.

also change log_id from typing.Optional[str] to str:

async def request(self, smpp_command: str, log_id: typing.Optional[str] = None) -> None:

implement `outbind` pdu

The purpose of the outbind operation is to allow the SMSC to signal an ESME (naz, in this case) to originate a bind_receiver request to the SMSC. An example of where such a facility might be applicable would be where the SMSC had outstanding messages for delivery to the ESME. The SMSC should bind to the ESME by issuing an outbind request. The ESME responds with a bind_receiver request, to which the SMSC will reply with a bind_receiver_resp. If the ESME does not accept the outbind session (e.g. because of an illegal system_id or password etc.) the ESME should disconnect the network connection. Once the SMPP session is established, the characteristics of the session are that of a normal SMPP receiver session.

set `address_range` so as to get `deliver_sm`

When you send a message via submit_sm to the SMSC (in this case the SMSC simulator), the SMSC logs;

428 INFO    9 Generated default validity period=190120075537000+
428 INFO    21 Assessing state of 1 messages in the OutboundQueue
429 INFO    9 :SUBMIT_SM_RESP:
429 INFO    9 Hex dump (18) bytes:
429 INFO    9 00000012:80000004:00000000:00000003:
429 INFO    9 3400
429 INFO    9 cmd_len=0,cmd_id=-2147483644,cmd_status=0,seq_no=3,message_id=4
430 INFO    9
430 INFO    9 SubmitSM processing - response written to connection

And then when naz gets the submit_sm_resp it logs;

{'event': 'naz.Client.speficic_handlers',
'stage': 'start',
'smpp_command': 'submit_sm_resp',
'correlation_id': 'myid1234-KOMU-YOLO',
'command_status': 0,
'state': 'Success',
'environment': 'staging',
'release': 'canary',
'smsc_host': '127.0.0.1',
'system_id': 'smppclient1',
'client_id': 'XV0N5M6WN6GO7Z1DB'}

{'event': 'naz.Client.parse_response_pdu',
'stage': 'end',
'smpp_command': 'submit_sm_resp',
'correlation_id': 'myid1234-KOMU-YOLO',
'command_status': 0,
'environment': 'staging',
'release': 'canary',
'smsc_host': '127.0.0.1',
'system_id': 'smppclient1',
'client_id': 'XV0N5M6WN6GO7Z1DB'}

And then the SMSC log continues;

429 INFO    21 Assessing state of 1 messages in the OutboundQueue
431 INFO    21 Message:3 state=DELIVERED
432 INFO    21 Delivery Receipt requested
433 INFO    20 addressIsServicedByReceiver(254722111111)
434 WARNING 20 Smsc: No receiver for message address to 254722111111
434 INFO    20 DELIVER_SM (receipt):
434 INFO    20 Hex dump (198) bytes:
434 INFO    20 000000C6:00000005:00000000:00000005:
434 INFO    20 434D5400:01013235:34373232:39393939:
435 INFO    20 39390001:01323534:37323231:31313131:
435 INFO    20 31000400:00000000:00000071:69643A34:
435 INFO    20 20737562:3A303031:20646C76:72643A30:
435 INFO    20 30312073:75626D69:74206461:74653A31:
435 INFO    20 39303132:30303735:3020646F:6E652064:
436 INFO    20 6174653A:31393031:32303037:35302073:
436 INFO    20 7461743A:44454C49:56524420:6572723A:
436 INFO    20 30303020:54657874:3A48656C:6C6F2057:
436 INFO    20 6F726C64:2D4B4F4D:552D594F:4C001E00:
437 INFO    20 02340004:27000102:1403000A:34343132:
437 INFO    20 33343536:3738
438 INFO    20 cmd_len=0,cmd_id=5,cmd_status=0,seq_no=5,service_type=CMT,source_addr_ton=1
438 INFO    20 source_addr_npi=1,source_addr=254722999999,dest_addr_ton=1,dest_addr_npi=1
438 INFO    20 destination_addr=254722111111,esm_class=4,protocol_ID=0,priority_flag=0
439 INFO    20 schedule_delivery_time=,validity_period=,registered_delivery_flag=0
439 INFO    20 replace_if_present_flag=0,data_coding=0,sm_default_msg_id=0,sm_length=113
439 INFO    20 short_message=id:4 sub:001 dlvrd:001 submit date:1901200750 done date:1901200750 stat:DELIVRD err:000 Text:Hello World-KOMU-YOL
439 INFO    20 TLV=30/2/3400,TLV=1063/1/02,TLV=5123/10/34343132333435363738
439 INFO    20
440 WARNING 20 InboundQueue: no active receiver object to deliver message. Application must issue BIND_RECEIVER with approriate address_range. Message has been moved to the pending queue

You can see in the SMSC logs;

INFO Delivery Receipt requested
INFO    20 addressIsServicedByReceiver(254722111111)
WARNING 20 Smsc: No receiver for message address to 254722111111
WARNING 20 InboundQueue: no active receiver object to deliver message. Application must issue BIND_RECEIVER with approriate address_range. Message has been moved to the pending queue

handle Delivery Receipt(deliver_sm) Format

The informational content of an SMSC Delivery Receipt may be inserted into the
short_message parameter of the deliver_sm operation. The format for this Delivery Receipt
message is SMSC vendor specific but following is a typical example of Delivery Receipt report:

id:IIIIIIIIII sub:SSS dlvrd:DDD submit date:YYMMDDhhmm done
date:YYMMDDhhmm stat:DDDDDDD err:E Text: . . . . . . . . .

see: Appendix B Delivery Receipt Format of SMPP spec v3.4 document
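
Under the assumption that the SMSC follows the typical Appendix B layout above, the receipt fields can be pulled out with a regex. This is only a hedged sketch, since the format is vendor specific and real SMSCs deviate from it:

```python
import re

# Matches the "typical" Appendix B delivery receipt layout shown above.
# Field names follow the SMPP v3.4 spec; vendors may deviate.
RECEIPT_RE = re.compile(
    r"id:(?P<id>\S+) +sub:(?P<sub>\d+) +dlvrd:(?P<dlvrd>\d+) +"
    r"submit date:(?P<submit_date>\d+) +done date:(?P<done_date>\d+) +"
    r"stat:(?P<stat>\S+) +err:(?P<err>\d+) +Text:(?P<text>.*)",
    re.DOTALL,
)

def parse_delivery_receipt(short_message: str) -> dict:
    match = RECEIPT_RE.match(short_message)
    if not match:
        # vendor-specific format we do not recognise; return the raw text
        return {"raw": short_message}
    return match.groupdict()

receipt = parse_delivery_receipt(
    "id:4 sub:001 dlvrd:001 submit date:1901200750 "
    "done date:1901200750 stat:DELIVRD err:000 Text:Hello World"
)
```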

handle user exceptions

whenever we call any user-supplied plugin (queues, rate limiters, throttle handlers etc.), we should handle (e.g. log) any exception that it may raise;

e.g.:

  1. item_to_dequeue = await self.outboundqueue.dequeue()

  2. send_request = await self.throttle_handler.allow_request()
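
A minimal sketch of such defensive handling, wrapping the dequeue call from example 1 (the logger name and the back-off duration are assumptions, not naz internals):

```python
import asyncio
import logging

logger = logging.getLogger("naz.client")

async def safe_dequeue(outboundqueue):
    # Call the user-supplied queue defensively: log any exception it
    # raises and back off, instead of letting it crash the client loop.
    try:
        return await outboundqueue.dequeue()
    except Exception:
        logger.exception("user-supplied dequeue raised")
        await asyncio.sleep(0.1)  # brief back-off before the caller retries
        return None
```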

replace naz json config file with a python file.

the current config file is a JSON file like:

{
    "smsc_host": "127.0.0.1",
    "smsc_port": 2775,
    "system_id": "smppclient1",
    "password": "password",
    "loglevel": "INFO"
}

There are times when you want (as an example) to fetch the password from an external secrets store like the AWS Parameter Store.
In such a case, fetching the parameter and adding it to the JSON config file is a hassle.

However, if the config file were a normal Python file, then:

import boto3

MY_PASSWORD = boto3.client("ssm").get_parameter(
    Name="/myapp/password", WithDecryption=True
)["Parameter"]["Value"]

CONFIG = {
    "smsc_host": "127.0.0.1",
    "smsc_port": 2775,
    "system_id": "smppclient1",
    "password": MY_PASSWORD,
    "loglevel": "INFO",
}
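
Loading such a Python config file can then be done with the stdlib importlib machinery. The CONFIG dict name here is an assumption about what the config file would export, not part of naz:

```python
import importlib.util

def load_python_config(path: str) -> dict:
    # Execute the config file as a throwaway module and return its
    # top-level CONFIG dict.
    spec = importlib.util.spec_from_file_location("naz_user_config", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module.CONFIG
```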

close connection on unbind

When the SMSC sends us an unbind PDU, we should:

  • respond with an unbind_resp PDU
  • possibly close the connection
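
The unbind_resp PDU is trivial to construct, since it has no body. A sketch of the two steps above (the handler wiring and the writer object are assumptions, not naz's actual internals):

```python
import struct

UNBIND = 0x00000006
UNBIND_RESP = 0x80000006

def unbind_resp_pdu(sequence_number: int) -> bytes:
    # SMPP header: command_length, command_id, command_status, sequence_number.
    # unbind_resp carries no body, so command_length is the 16-byte header.
    return struct.pack(">IIII", 16, UNBIND_RESP, 0, sequence_number)

async def handle_unbind(sequence_number, writer):
    # Acknowledge the SMSC's unbind, then close the transport.
    writer.write(unbind_resp_pdu(sequence_number))
    await writer.drain()
    writer.close()
```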

Incorrect BIND Status for given command

when we send an enquire_link, we sometimes get a response like this in the enquire_link_resp:

{'event': 'naz.Client.speficic_handlers',
 'stage': 'start',
 'smpp_event': 'enquire_link_resp',
 'correlation_id': None,
 'command_status': 4,
 'state': 'Incorrect BIND Status for given command',
 'environment': 'myenv',
 'smsc_host': 'host',
 'system_id': 'something',
 'client_id': 'SJeqrJJW'}

This usually happens if, when binding to the SMSC:

  • we bind as bind_receiver instead of bind_transceiver
  • the bind succeeds
  • then we try to send data to the SMSC
  • the SMSC is like: you guys bound as receivers; that binding does not allow you to send data to us.
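
In the spec, command_status 4 is ESME_RINVBNDSTS. A tiny illustrative guard for this rule (the bind-mode strings are assumptions for illustration, not naz identifiers):

```python
# SMPP v3.4: command_status 0x00000004 is ESME_RINVBNDSTS,
# i.e. "Incorrect BIND Status for given command".
ESME_RINVBNDSTS = 0x00000004

# Only transmitter and transceiver binds may submit messages;
# a receiver-only bind can only receive.
SEND_CAPABLE_BINDS = {"bind_transmitter", "bind_transceiver"}

def can_submit(bind_mode: str) -> bool:
    return bind_mode in SEND_CAPABLE_BINDS
```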

add support for `receipted_message_id` & fix correlation_handler bug

We are currently using sequence_numbers to correlate a request to smsc and a response from smsc:

naz/naz/client.py, lines 380 to 386 in 881c1fe:

# associate sequence_number with log_id.
# this will enable us to also associate responses and thus enhancing traceability of all workflows
try:
    await self.correlation_handler.put(
        sequence_number=sequence_number, log_id=log_id, hook_metadata=""
    )
except Exception as e:

The SMPP spec has an optional parameter user_message_reference that you can send to the SMSC as part of the request (submit_sm), and the SMSC will send it back in the response (deliver_sm).

see section 4.4.1 & 4.6.1 of smpp spec.

Two things to note:

  1. when you send submit_sm, the sequence_number you specify is returned in the submit_sm_resp.
    However, that sequence_number is not available in the deliver_sm request from the SMSC.
    This means that our correlation as it currently exists (the correlation_handler.put call in
    naz/naz/client.py, lines 380 to 386, quoted above) is broken.
    We need to implement user_message_reference to try and fix that

  2. When you send a user_message_reference in submit_sm, it is not returned in the submit_sm_resp; it is only returned in the deliver_sm.

So, the way forward:

  • continue using our current correlation, but only to correlate submit_sm and submit_sm_resp
  • implement receipted_message_id to correlate submit_sm and deliver_sm

One is a bug while the second one is a feature request.

We should at the very least fix the bug.
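
The two correlation paths above can be sketched as follows. The in-memory dicts and method names here are assumptions for illustration, not naz's actual correlation_handler interface:

```python
class SimpleCorrelater:
    # Correlate submit_sm -> submit_sm_resp via sequence_number, and
    # submit_sm -> deliver_sm via the SMSC-assigned message_id that comes
    # back in the delivery receipt's receipted_message_id TLV.
    def __init__(self):
        self.by_sequence_number = {}  # sequence_number -> log_id
        self.by_message_id = {}       # smsc message_id -> log_id

    def put_request(self, sequence_number, log_id):
        self.by_sequence_number[sequence_number] = log_id

    def put_submit_sm_resp(self, sequence_number, smsc_message_id):
        # submit_sm_resp echoes our sequence_number and carries the
        # SMSC-assigned message_id; remember the mapping for later.
        log_id = self.by_sequence_number.get(sequence_number)
        if log_id is not None:
            self.by_message_id[smsc_message_id] = log_id

    def get_for_deliver_sm(self, receipted_message_id):
        # deliver_sm arrives with a fresh sequence_number, so correlate
        # on the receipted_message_id optional TLV instead.
        return self.by_message_id.get(receipted_message_id)
```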
