
smppex's Introduction

Smppex

Smppex avatar: Elixir logo with a chat bubble


SMPP 3.4 protocol and framework implementation in Elixir.

See Examples for details.

Documentation

API documentation is available at hexdocs.pm/smppex.

Live Demo

There is a simple online demo MC (SMPP server) at smppex.rubybox.ru.

Related projects

A list of related projects can be found here.

Installation

To install and use the package:

  1. Add smppex to your list of dependencies in mix.exs:

    def deps do
      [{:smppex, "~> 3.0"}]
    end
  2. Ensure smppex is started before your application:

    def application do
      [applications: [:smppex]]
    end

License

This software is licensed under the MIT License.

Credits

The picture for the project was made by Igor Garybaldi.

Sponsored by FunBox


smppex's Issues

SMPPEX.MC doesn't send enquire_link automatically

Hello!

I am trying to set up a Message Center using SMPPEX.MC.
I have some problems with mc_opts. The documentation says the default behavior is to send an enquire_link PDU every 30000 ms. But when I connect to my Message Center with a test client and log every outgoing PDU in handle_send_pdu_result (using the standard Logger.info), nothing shows up.
Then I went into SMPPEX.Session itself and put a Logger call here: https://github.com/savonarola/smppex/blob/da36a8e473309c16108b114a753e07dee9d8a180/lib/smppex/session.ex#L625

I can see that it builds the enquire_link PDU but doesn't send it.
I also tried passing the enquire_link_limit option explicitly, and it didn't solve the problem.
Can you tell me what the problem is? Where am I wrong?

Thanks)
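For reference, here is a minimal sketch (not taken from the report above) of starting an MC with the enquire_link interval set explicitly. MyMessageCenter is a placeholder session module, and the :mc_opts / :enquire_link_limit option names follow the documentation mentioned above, so verify them against your smppex version.

{:ok, mc} =
  SMPPEX.MC.start(
    {MyMessageCenter, []},
    transport_opts: [port: 2775],
    # assumption: timer options are passed via :mc_opts as described in the docs
    mc_opts: [enquire_link_limit: 30_000]
  )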

Question: Wrong use of handle_socket_error?

Hi guys,
I have a session implementation that implements the handle_socket_error callback. Maybe I'm misunderstanding something, but from reading the API docs I assumed the callback is called whenever a network error or something similar occurs. I artificially triggered an error by cutting the internet connection, but the callback was not called; the session just ran into a timeout and was terminated. Am I getting something wrong?

smppex vs oserl

Just wondering: why a completely new library? What is the benefit compared to existing libs like oserl?
We're looking for a good SMPP library in Erlang/Elixir, so I'm looking at available options.

How to use the SMPPEX.SimpleClient module

Hi, I am using the SMPPEX.SimpleClient module to try to send an SMPP message to the SMPPSim simulator.

The call to the create function succeeds; it returns a pid and a ref.

But how can I call the send_pdu function?

The send_pdu function takes a session as a parameter, but how do I obtain this session?

Thank you.

Add support for elixir 1.7's handle_continue/2

https://michal.muskala.eu/2018/06/20/my-otp-21-highlights.html#new-callback-in-genserver-handle_continue2

handle_continue was added to OTP 21 primarily to support asynchronous setup without blocking (this way we can run code right after init, but make sure start_link doesn't time out or error).

Would be extremely useful to be able to bind right after init.

At the moment we do something like

  def init(...) do
    ...
    Kernel.send(self(), :bind)
    {:ok, %{}}
  end

  def handle_info(:bind, st) do
    pdu = bind(....)
    {:noreply, [pdu], st}
  end

Which is definitely not ideal (I remember I had to hack around a bit because certain sends wouldn't allow sending to self at all).

With the new GenServer callbacks, this would look like:

  def init(...) do
    ...
    {:ok, %{}, {:continue, :bind}}
  end

  def handle_continue(:bind, st) do
    pdu = bind(....)
    {:noreply, [pdu], st}
  end

Unbind/Graceful shutdown

One more case we've run into while stress testing!

SMPP supports an unbind operation (either SMSC or ESME are allowed to send it), signalling a "log off" of sorts. It's the correct way to signal session termination without relying on enquire_link timeouts.

Scenario: We're connecting to an upstream SMSC via our ESME implementation. The upstream SMSC has a connection limit (max 8 connections), and we're running 8 connections. Any connections after that receive an error.

We decide to do a new deploy. Old connections are terminated, but the SMSC won't detect them as dead until the next enquire_link (15-45s). If we try opening new connections now, we will receive errors, and the upstream will also be trying to send messages to those dead connections (until it sees the node died and schedules a retry).

The solution here is to send an unbind PDU (and, since SMPPEX is async, preferably wait for a resp). We tried to do that inside handle_stop:

@ esme.ex:44 @ defmodule ESME do

   def handle_resp(pdu, original_pdu, st) do
     case Pdu.command_name(pdu) do
+      :unbind_resp ->
+        send self(), :unbound
@ esme.ex:161 @ defmodule ESME do

   def handle_stop(st) do
+    send_unbind
+    receive do
+      :unbound -> nil
+    after
+      5_000 -> nil
+    end
   end

Problem with this is that the socket is closed just before handle_stop, and it's impossible to determine if the socket died, or if we're just gracefully shutting down.

Possible solution: For a graceful GenServer shutdown (we're using distillery + SIGTERM, which will call :init.stop for us), you'd normally do Process.flag(:trap_exit, true) inside init, which will make it call def terminate(reason, state).

Maybe making SMPPEX.ESME/MC implement terminate that calls our module's terminate would work fine? Hopefully that will be called before handle_stop terminates the socket, but I'm not sure if that will work with MC since it uses ranch connections.


I know I've piled up a few issues now, so if the solution above sounds fine, I can go ahead and try implementing it! I'm just not sure what the most semantic solution would be, so I figured I'd open up a discussion first.

Why is poison a dependency?

I see no usage of Poison in the codebase, so why is it a dependency? Even if it's optional it has caused havoc in my codebase as of late with libraries that support both jason and poison since mix pulls it in regardless.

deliver_sm_resp sequence number

I have this piece of code

def handle_pdu(pdu, st) do
  sequence_number = Pdu.sequence_number(pdu)

  case pdu |> SMPPEX.Pdu.command_id() |> SMPPEX.Protocol.CommandNames.name_by_id() do
    {:ok, :deliver_sm} ->
      Logger.info("PDU received: #{inspect pdu}")
      Logger.info("Entering deliver_sm")
      Logger.info("SequenceNumberReceived: #{inspect sequence_number}")
      # {:ok, command_id} = CommandNames.id_by_name(:deliver_sm_resp)
      deliver_sm_resp = Pdu.new({0x80000005, 0, sequence_number})
      # deliver_sm_resp = Pdu.Factory.deliver_sm_resp(0)
      Logger.info("deliver_sm_resp #{inspect deliver_sm_resp}")
      SMPPEX.ESME.send_pdu(self(), deliver_sm_resp)
      responder(pdu, sequence_number + 1)
      st

    _ ->
      st
  end
end

The deliver_sm_resp is not using the assigned sequence_number, which must be equal to the deliver_sm sequence number. In the attached capture the PDU sequence numbers are 133 and 134.

Your help will be appreciated.
Best regards

tcpdump201703221551.zip

Be able to distinguish between deliberate stops and socket failures.

One of the main issues we have is that if the ESME's socket fails, we shut down with {:stop, :normal, state}, which looks the same as if we had just regularly stopped the ESME. We need to be able to distinguish between a closed socket (network failure, restart the process) and the application deliberately stopping the ESME.

user_message_reference/tag values not decoding properly

This deliver_sm packet has a user_message_reference optional value.

 └─ λ xxd raw.bin
00000000: 0000 004a 0000 0005 0000 0000 2710 150e  ...J........'...
00000010: 0001 0131 3231 3233 3435 3637 3839 0001  ...12123456789..
00000020: 0131 3233 3435 3637 3839 3031 0000 0000  .12345678901....
00000030: 0000 0000 0300 0d4d 6f72 6520 6d65 7373  .......More mess
00000040: 6167 6573 0204 0002 004d                 ages.....M

But when decoding, it's returned under the raw tag 516 (0x0204 = user_message_reference) in the optional map instead of being decoded properly.

iex(13)> v = File.read!("raw.bin")
<<0, 0, 0, 74, 0, 0, 0, 5, 0, 0, 0, 0, 39, 16, 21, 14, 0, 1, 1, 49, 50, 49, 50,
  51, 52, 53, 54, 55, 56, 57, 0, 1, 1, 49, 50, 51, 52, 53, 54, 55, 56, 57, 48,
  49, 0, 0, 0, 0, 0, 0, ...>>
iex(14)> SMPPEX.Protocol.parse(v)
{:ok,
 {:pdu,
  %SMPPEX.Pdu{command_id: 5, command_status: 0,
   mandatory: %{data_coding: 3, dest_addr_npi: 1, dest_addr_ton: 1,
     destination_addr: "12345678901", esm_class: 0, priority_flag: 0,
     protocol_id: 0, registered_delivery: 0, replace_if_present_flag: 0,
     schedule_delivery_time: "", service_type: "",
     short_message: "More messages", sm_default_msg_id: 0, sm_length: 13,
     source_addr: "12123456789", source_addr_npi: 1, source_addr_ton: 1,
     validity_period: ""}, optional: %{516 => 77},
   ref: #Reference<0.0.262145.195893>, sequence_number: 655365390}}, ""}

Meanwhile, Wireshark does parse it correctly (see the attached screenshot).

Get report from short_message

Some operators return the delivery status of an SMS inside the short message itself. Is there a method in the SMPPEX library to extract the delivery information (date, stat, err, etc.) from short_message?

Example:

%SMPPEX.Pdu{
  command_id: 5,
  command_status: 0,
  mandatory: %{
    data_coding: 0,
    dest_addr_npi: 0,
    dest_addr_ton: 5,
    destination_addr: "TEST",
    esm_class: 4,
    priority_flag: 0,
    protocol_id: 0,
    registered_delivery: 0,
    replace_if_present_flag: 0,
    schedule_delivery_time: "",
    service_type: "",
    short_message: "id:rdwjwxns18krxr9936ey96ymcw sub:000 dlvrd:000 submit date:180711070003912+ done date:180711070000012+ stat:UNDELIV err:000",
    sm_default_msg_id: 0,
    sm_length: 124,
    source_addr: "79222222222",
    source_addr_npi: 1,
    source_addr_ton: 1,
    validity_period: ""
  },
  optional: %{},
  ref: #Reference<0.655675225.434896897.260194>,
  sequence_number: 4
}
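If the library doesn't ship a parser for these receipt strings (an assumption; check the current API first), the fields can be pulled out of short_message directly. A minimal sketch, assuming the common SMPP 3.4 receipt layout shown above:

defmodule ReceiptParser do
  # Matches key:value pairs such as "stat:UNDELIV"; the two-word keys
  # "submit date" / "done date" are listed first so they win over the shorter "sub".
  @pair ~r/(submit date|done date|id|sub|dlvrd|stat|err):(\S+)/

  def parse(short_message) do
    @pair
    |> Regex.scan(short_message)
    |> Map.new(fn [_full, key, value] -> {key, value} end)
  end
end

# ReceiptParser.parse(SMPPEX.Pdu.field(pdu, :short_message))
# #=> %{"stat" => "UNDELIV", "err" => "000", "done date" => "180711070000012+", ...}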

`deliver_sm_resp` Factory incomplete

A deliver_sm_resp message should carry the sequence_number of the corresponding deliver_sm PDU. Something like this would be helpful:

Pdu.Factory.deliver_sm_resp(result, Pdu.sequence_number(deliver_sm_pdu))

Also, deliver_sm_resp requires message_id to be a null byte, but setting that field causes the payload to be malformed.

I've tried both:

echo = Pdu.Factory.deliver_sm_resp(0)
            |> Map.put(:sequence_number, Pdu.sequence_number(pdu))
            |> Pdu.set_mandatory_field(:message_id, nil)

and:

echo = Pdu.Factory.deliver_sm_resp(0)
            |> Map.put(:sequence_number, Pdu.sequence_number(pdu))
            |> Pdu.set_mandatory_field(:message_id, <<0x00>>)

Neither seems to work.

support starting MC in the client app supervision tree

Currently an MC can be started via SMPPEX.MC.start, which starts the processes in the ranch app's supervision tree. It would be nice if, as a client developer, I could start the MC somewhere in my own app's supervision tree. Luckily, ranch supports this via a so-called embedded mode. The :ranch.child_spec/5 function can be used to generate a supervisor-compliant child spec.

My proposal is to add SMPPEX.MC.child_spec/1 which would take a keyword list of opts and delegate to :ranch.child_spec. That way the client developer could provide something like {SMPPEX.MC, session: {MySession, session_arg}, ...} as a supervisor child. This basically follows the Plug.Cowboy API.

If you agree, I'll be happy to submit a PR.
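To make the proposal concrete, usage under the suggested API would look roughly like this. This is hypothetical code: SMPPEX.MC.child_spec/1 does not exist yet, and MySession, session_arg and the option names are placeholders following the Plug.Cowboy style mentioned above.

children = [
  {SMPPEX.MC,
   session: {MySession, session_arg},
   transport_opts: [port: 2775]}
]

Supervisor.start_link(children, strategy: :one_for_one)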

delivery report generator

I think it makes sense to add a generator for delivery reports to the library, because it's a common task and is not going to differ much from app to app. Something like this:

    def source_tuple(pdu) do
      Enum.map([:source_addr, :source_addr_ton, :source_addr_npi], fn x -> Pdu.field(pdu, x) end)
      |> List.to_tuple
    end

    def dest_tuple(pdu) do
      Enum.map([:destination_addr, :dest_addr_ton, :dest_addr_npi], fn x -> Pdu.field(pdu, x) end)
      |> List.to_tuple
    end

    @message_state_delivered 2
    def gen_delivery_report(pdu, resp_pdu, message \\ "", message_state \\ @message_state_delivered) do
      source = source_tuple(pdu)
      dest = dest_tuple(pdu)
      message_id = Pdu.field(resp_pdu, :message_id)
      SMPPEX.Pdu.Factory.delivery_report(message_id, source, dest, message, message_state)
    end

Not sure where to add this, though. So sending as an issue and not as a pull request.

Q: Any recommendations on how to process submit_sm asynchronously?

Hi,

For the app I'm working on, due to business requirements, :submit_sm processing needs to access some external services, which takes time. Therefore, we decided to try handling those :submit_sm's asynchronously. Currently it goes something like this:

defmodule OurSession do
  use SMPPEX.Session

  def handle_pdu(original_pdu, state) do
    case SMPPEX.Pdu.command_name(original_pdu) do
      :submit_sm ->
        do_handle_submit_sm_async(original_pdu, state)
        {:ok, [], state}
    end
  end

  defp do_handle_submit_sm_async(original_pdu, state) do
    Task.start(fn ->
      # time intensive
      data = fetch_data_from_external_service(original_pdu, state)

      # use the external data to build the response and the new state
      response_pdu = build_response_pdu(original_pdu, data)
      new_state = build_new_state(original_pdu, state, data)

      # set the new state and reply to :submit_sm
      :ok = SMPPEX.Session.call(state.session_pid, {:set_state, new_state})
      :ok = SMPPEX.Session.send_pdu(state.session_pid, response_pdu)
    end)
  end
end

Unfortunately, due to the GenServer calls in Session.call and Session.send_pdu, we must use this trick with Task.start. I tried investigating various approaches, including using Session.cast to perform this, but none worked.

Do you have any suggestions on how to achieve asynchronous :submit_sm processing, but in a more elegant manner?

Thank you for your time!

Support semantic versioning of dependencies

./deps/smppex/mix.exs:43: {:ranch, ">= 1.3.0 and < 1.6.0"}
Can we just say we support 1.3+? I have a hard requirement on ranch 1.7+ because of a new version of Phoenix, and this causes dependency conflicts.

submit_sm sequence number incorrect

It looks like the sequence numbers are incorrect as soon as we send a submit_sm. We ran into this in production (sorry for not sending a Wireshark trace; we have privacy requirements): when a submit_sm was sent very close to an enquire_link, we received the enquire_link_resp in handle_resp/3 of our SMPPEX.Session implementation (which is unexpected).

From: ESME (SMPPEx) To: Remote SMSC (foreign implementation)
Short Message Peer to Peer, Command: Enquire_link, Seq: 16, Len: 16
Length: 16
Operation: Enquire_link (0x00000015)
Sequence #: 16

From: Remote SMSC (foreign implementation) To: ESME (SMPPEx)
Short Message Peer to Peer, Command: Enquire_link - resp, Status: "Ok", Seq: 16, Len: 16
Length: 16
Operation: Enquire_link - resp (0x80000015)
Result: Ok (0x00000000)
Sequence #: 16

From: ESME (SMPPEx) To: Remote SMSC (foreign implementation)
Short Message Peer to Peer, Command: Submit_sm, Seq: 18, Len: 140
Length: 140
Operation: Submit_sm (0x00000004)
Sequence #: 18
Service type: (Default)
Type of number (originator): International (0x01)
Numbering plan indicator (originator): ISDN (E163/E164) (0x01)
Originator address: xxxxxxxxxxx
Type of number (recipient): International (0x01)
Numbering plan indicator (recipient): ISDN (E163/E164) (0x01)
Recipient address: xxxxxxxxxxx
.... ..00 = Messaging mode: Default SMSC mode (0x0)
..00 00.. = Message type: Default message type (0x0)
00.. .... = GSM features: No specific features selected (0x0)
Protocol id.: 0x00
Priority level: GSM: None ANSI-136: Bulk IS-95: Normal (0x00)
Scheduled delivery time: Immediate delivery
Validity period: SMSC default validity period
.... ..00 = Delivery receipt: No SMSC delivery receipt requested (0x0)
.... 00.. = Message type: No recipient SME acknowledgement requested (0x0)
...0 .... = Intermediate notif: No intermediate notification requested (0x0)
.... ...0 = Replace: Don't replace (0x0)
Data coding: 0x00
Predefined message: 0
Message length: 85
Message: ...

From: Remote SMSC (foreign implementation) To: ESME (SMPPEx)
Short Message Peer to Peer, Command: Submit_sm - resp, Status: "Ok", Seq: 18, Len: 25
Length: 25
Operation: Submit_sm - resp (0x80000004)
Result: Ok (0x00000000)
Sequence #: 18
Message id.: XXXXXXX

From: ESME (SMPPEx) To: Remote SMSC (foreign implementation)
Short Message Peer to Peer, Command: Enquire_link, Seq: 18, Len: 16
Length: 16
Operation: Enquire_link (0x00000015)
Sequence #: 18

From: Remote SMSC (foreign implementation) To: ESME (SMPPEx)
Short Message Peer to Peer, Command: Enquire_link - resp, Status: "Ok", Seq: 18, Len: 16
Length: 16
Operation: Enquire_link - resp (0x80000015)
Result: Ok (0x00000000)
Sequence #: 18

timer issue

For some reason, I cannot have two instances of SMPPEX.Session running.

16:19:58.456 [info]  ESME stopped with reason {:timers, :session_init_timer}

16:19:58.457 [error] GenServer #PID<0.15928.1> terminating
** (stop) {:timers, :session_init_timer}
Last message: {:check_timers, -576459768537}
State: %SMPPEX.TransportSession{buffer: "", module: SMPPEX.Session, module_state: %SMPPEX.Session{auto_pdu_handler: %SMPPEX.Session.AutoPduHandler{by_ref: #Reference<0.3465077020.1560674305.128573>, by_sequence_number: #Reference<0.3465077020.1560674305.128572>}, module: SmppTest.SmppEsme, module_state: %{bound: true, buffer_size: 500000, host: "smppex.rubybox.ru", outbox_size: 0, password: "password", pobox_pid: #PID<0.15929.1>, port: 2775, system_id: "smppclient1", waiting_for_more: true, window: 5000}, pdus: %SMPPEX.PduStorage{by_sequence_number: #Reference<0.3465077020.1560674305.128571>}, response_limit: 60000, sequence_number: 1, tick_timer_ref: #Reference<0.3465077020.1560543233.131401>, time: -576459768537, timer_resolution: 100, timers: %SMPPEX.SMPPTimers{connection_time: -576459778637, enquire_link_limit: 30000, enquire_link_resp_limit: 30000, enquire_link_state: :active, inactivity_limit: :infinity, last_peer_action_time: -576459778637, last_transaction_time: 0, session_init_limit: 10000, session_init_state: :established}}, ref: #Reference<0.3465077020.1560543233.128554>, socket: #Port<0.10600>, transport: :ranch_tcp}

I am not sure what I could be doing wrong. Any insights?

Set default_call_timeout to SMPPEX.Session.Defaults or config

Hi, thank you for your work!

I got a timeout error in production when invoking Session.send_pdu/2. This happens when TransportSession.call(pid, {:send_pdu, pdu}) receives no reply within 5000 milliseconds. TransportSession has a module attribute @timeout, but we can't pass a timeout through Session.send_pdu/2 and we can't change @default_call_timeout. I would like to solve this issue if you don't mind. I see two ways:

  1. Set default_call_timeout through the SMPPEX.Session.Defaults module and add a timeout argument to Session.send_pdu/3.
  2. Set default_call_timeout in the config/config.exs file and add a timeout argument to Session.send_pdu/3.

How can I send SMS?

I don't get it: how can I send SMS with smppex?

For example, in the https://github.com/VoyagerInnovations/esmpp lib I can send SMS with
Ids = esmpp:send_sms(C, <<"12345">>, <<"639473371390">>, <<"Hello">>).

where 12345 is the source number and 639473371390 is the destination mobile number.

I do:

{:ok, esme} = SMPPEX.ESME.Sync.start_link("122.222.122.122", 12000)
bind = SMPPEX.Pdu.Factory.bind_transmitter("test", "test")
{:ok, _bind_resp} = SMPPEX.ESME.Sync.request(esme, bind)
submit_sm = SMPPEX.Pdu.Factory.submit_sm({"EDTEST", 1, 1},{"79685555555", 1, 1}, "hello!")
{:ok, submit_sm_resp} = SMPPEX.ESME.Sync.request(esme, submit_sm)

But the SMS does not arrive.

Trigger handle_resp_timeout for in-flight messages on socket close.

Back again, sorry :)

The scenario: SMSC/ESME sends a message to the other side. Seconds later, the connection gets severed and the socket closes. The :response_limit timeout is not reached yet, but the socket close terminates the connection process, and we lose the unacknowledged messages that were in flight.

Solution: Not sure what would be best here, maybe trigger handle_resp_timeout for all messages?

ESME doesn't work with SSL/TLS

Generate a server key and cert:

openssl genrsa 1024 > host.key
openssl req -new -x509 -nodes -sha1 -days 365 -key host.key -out host.crt

The SMSC starts fine:

{:ok, mc_server} = SMPPEX.MC.start({__MODULE__, %{}}, [
 transport_opts: [
   port:  8443,
   certfile: '/priv/host.crt',
   keyfile: '/priv/host.key'
 ],
 transport: :ranch_ssl
])

But then the ESME opens the connection and just does nothing:

host = "localhost"
port = 8443
SMPPEX.ESME.start_link(host, port, {__MODULE__, %{}}, [transport: :ranch_ssl])

And the SMSC disconnects the ESME for not starting the session in time:

iex(5)> time=13:56:12.569 level=info       Session #PID<0.740.1>, being stopped by timers(session_init_timer)
time=13:56:12.569 level=error  GenServer #PID<0.740.1> terminating
** (stop) {:timers, :session_init_timer}
Last message: {:check_timers, -576460635659}

Usage of functions that require session/pid

I find the usage of functions such as send_pdu, which take a session/process id, quite difficult. I passed the name of the module implementing the callbacks, expecting it to work just like a GenServer; I was expecting the module name to be used for process registration.

SMPPEX.ESME.start_link(ip, port, {MyModule, []})
SMPPEX.Session.send_pdu(__MODULE__, ......)

However, it doesn't work; I need to explicitly pass the pid obtained from start_link to get this function to work, or I have to rely on the various callbacks to send PDUs.

Am I missing something?

TLV fields

Hello.

How can I set the optional TLV field 0x3004 to the value 2 (integer, uint8) on a PDU? It indicates a service SMS.
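For reference, one way to attach a raw TLV to a factory-built PDU is to update the struct's optional map directly. This is only a sketch: whether the encoder in your smppex version expects an integer or a binary value for unknown tags is an assumption to verify.

# add a vendor TLV (tag 0x3004, value 2) to a PDU built with the factory
pdu = SMPPEX.Pdu.Factory.submit_sm({"TEST", 5, 0}, {"79001234567", 1, 1}, "hi")
pdu = %{pdu | optional: Map.put(pdu.optional, 0x3004, <<2>>)}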

log noise due to timeouts

A timeout generates some log noise, because the transport session GenServer terminates with an abnormal reason. For example:

17:49:34.843 [info]  Session #PID<0.330.0>, being stopped by timers(session_init_timer)
 
17:49:34.856 [info]  Session #PID<0.330.0> stopped with reason: {:timers, :session_init_timer}, lost_pdus: []
 
17:49:34.863 [error] GenServer #PID<0.330.0> terminating
** (stop) {:timers, :session_init_timer}
Last message: {:check_timers, -576460738398}
State: %SMPPEX.TransportSession{buffer: "", mode: :mc, module: SMPPEX.Session, module_opts: [{SmppServer.Session, nil}, []], module_state: %SMPPEX.Session{auto_pdu_handler: %SMPPEX.Session.AutoPduHandler{by_ref: #Reference<0.451547869.4223270917.96940>, by_sequence_number: #Reference<0.451547869.4223270917.96939>}, module: SmppServer.Session, module_state: nil, pdus: %SMPPEX.PduStorage{by_sequence_number: #Reference<0.451547869.4223270917.96936>}, response_limit: 60000, sequence_number: 0, tick_timer_ref: #Reference<0.451547869.4223139845.97337>, time: -576460738398, timer_resolution: 100, timers: %SMPPEX.SMPPTimers{connection_time: -576460748497, enquire_link_limit: 30000, enquire_link_resp_limit: 30000, enquire_link_state: :active, inactivity_limit: :infinity, last_peer_action_time: 0, last_transaction_time: 0, session_init_limit: 10000, session_init_state: :established}}, ref: #Reference<0.451547869.4223139842.97123>, socket: #Port<0.7>, transport: :ranch_tcp}
 
17:49:34.881 [error] Ranch listener #Reference<0.451547869.4223139842.97123> had connection process started with SMPPEX.TransportSession:start_link/3 at #PID<0.330.0> exit with reason: {:timers, :session_init_timer}

I'd like to be able to suppress this noise. I did a quick experiment, and returning {:stop, :normal, [], st} here seems to do the job. I'm not super familiar with the codebase, so I might have missed something though.

To avoid silently breaking the existing behaviour, we could introduce a new callback called e.g. handle_timeout. This callback would receive the timeout reason, and it has to return the exit reason. The default implementation generated by the __using__ macro would return {:timers, reason}. I'd also move this logger invocation to that function. So the stop clause in the timer check boils down to: {:stop, reason} -> {:stop, st.module.handle_timeout(reason, st.module_state), [], st}.

Alternatively, we could even allow the client to decide if they want to stop the server or resume. Not sure if that makes sense though.

Let me know what you think. I'm also open to other ideas, but in any case I'd like to have the ability to remove these entries from the log. I'm of course willing to submit a PR once we agree on an approach.

ETS ** Too many db tables ** error

Hi man, I've recently run into this problem in my production environment:

CRASH REPORT==== 7-Nov-2016::09:00:05 ===
  crasher:
    initial call: ranch_conns_sup:init/7
    pid: <0.8347.1>
    registered_name: []
    exception exit: {system_limit,
                        [{ets,new,[pdu_storage_by_sequence_number,[set]],[]},
                         {'Elixir.SMPPEX.PduStorage',init,1,
                             [{file,"lib/smppex/pdu_storage.ex"},{line,43}]},
                         {gen_server,init_it,6,
                             [{file,"gen_server.erl"},{line,328}]},
                         {proc_lib,init_p_do_apply,3,
                             [{file,"proc_lib.erl"},{line,247}]}]}
      in function  ranch_conns_sup:terminate/3 (src/ranch_conns_sup.erl, line 224)
    ancestors: [<0.8346.1>,<0.8345.1>]
    messages: []
    links: []
    dictionary: [{<0.8348.1>,true}]
    trap_exit: true
    status: running
    heap_size: 610
    stack_size: 27
    reductions: 261
  neighbours: 
.....
[error] * Too many db tables

It seems like it has something to do with the PduStorage; is there any potential misconfiguration in the SMPPEX code?

Thanks,
Long

ESME: session getting messed up after GenServer.call timeout

Sadly I can't provide many details for this as we ran out of disk space for logs this weekend and I'm missing the logs from when this started, but:

We have a pool of ESMEs through which we send messages out. For some reason, after about 2-3 days of running without restarts, we started getting failures on most (maybe all?) of the processes. The weird thing is, each process started to get enquire_link_timer failures, but not just once, but hundreds of times over the next ~18 seconds, after which it seems to finally get killed (and we restart it):

# basically this, spanning 500 lines
time=20:15:19.904 level=info esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
# ...
time=20:15:32.876 level=info	esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
time=20:15:32.913 level=info	service=esme_consumer action=route to=Provider pid=#PID<0.8869.0> uuid=11111-2222-333-4444-555555
time=20:15:32.987 level=info	esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
time=20:15:33.078 level=info	esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
time=20:15:33.179 level=info esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
# ...
time=20:15:37.224 level=info	esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
time=20:15:37.320 level=info	esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
time=20:15:37.421 level=info	esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
time=20:15:37.522 level=info	esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
time=20:15:37.623 level=info	esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
time=20:15:37.724 level=info	esme #PID<0.8869.0>, being stopped by timers(enquire_link_timer)
time=20:15:37.815 level=error GenServer #PID<0.8869.0> terminating

However, this doesn't seem like a normal enquire_link timeout, because everything else that was using the timers (st.time) also went crazy. Any PDU we tried to send would time out almost immediately (~5-100 ms later, via handle_resp_timeout), even though we had response_limit set to 20_000.

SMPPEX.ESME.start_link(host, port, {__MODULE__, opts}, [esme_opts: [enquire_link_limit: 45_000, response_limit: 20_000]])

time=20:15:22.392 level=info	service=esme_consumer action=route to=Provider pid=#PID<0.8936.0> uuid=11111-2222-333-4444-555555
time=20:15:22.403 level=error	bound=true bind_type=transceiver carrier=Provider service=esme op=submit_sm_resp status=error uuid=11111-2222-333-4444-555555 reason=timeout

We do trap exits inside the ESME, but since it's a GenServer, I think all that does is ensure that subprocess exits are caught, and that if the process gets an exit signal, terminate/2 gets called for us for a graceful shutdown.

So far it has only happened once, but when it did, it kept happening continually, even after the processes restarted. We restarted the entire node and it got back to normal. Everything seems to stem from that st.time value, which is updated every tick. Highly unlikely, but could this be a monotonic timer bug? It's possible it could be a bug in our application code too, but that doesn't explain why the timeouts happen so fast (~5-100 ms instead of 20 s).

We're currently looking into this urgently, because the timeouts start happening so fast, before the actual response has time to return, that we've been delivering duplicate messages.

submit_sm/deliver_sm data_coding is mandatory, but the factory never includes it

iex(2)> SMPPEX.Pdu.Factory.submit_sm({"123", 1, 1}, {"456", 1, 1}, "Hi!")
%SMPPEX.Pdu{command_id: 4, command_status: 0,
 mandatory: %{dest_addr_npi: 1, dest_addr_ton: 1, destination_addr: "456",
   registered_delivery: 0, short_message: "Hi!", source_addr: "123",
   source_addr_npi: 1, source_addr_ton: 1}, optional: %{},
 ref: #Reference<0.0.5.466>, sequence_number: 0}

I've gotten around it by doing:

defp set_mandatory_field(%Pdu{} = pdu, field, val) do
  %{pdu | mandatory: pdu.mandatory |> Map.put(field, val)}
end

pdu |> set_mandatory_field(:data_coding, 3)

If we extend the submit_sm/deliver_sm factories, though, we break backwards compatibility: the spec states "There is no default setting for the data_coding parameter.", so we'd have to include it as an extra argument with no default value (the function arity would change).

Sending multipart message

I've been scratching my head trying to figure out how to send a multipart message. I've tried the code below; however, an error is returned from the SMSC.

pdu =
  SMPPEX.Pdu.new(
    4,
    %{sm_length: 0, short_message: "", destination_addr: "xxxxxxxxx"},
    %{message_payload: "Message with length over 300"}
  )

The SMSC returns the following response:

 %SMPPEX.Pdu{command_id: 2147483652, command_status: 4, mandatory: %{message_id: "0000000000"}, optional: %{}, ref: #Reference<0.1222358344.4153409537.176228>, sequence_number: 3}

I'm not well versed in the usage of TLVs, and my attempts to gain insight from other documents have proven futile.
Kindly assist, thank you.

Handling of special characters (hyphen, underscore) - source_addr_ton

Hi, I ran into an issue where the SMSC was returning status code 72, indicating that the source address TON was invalid; in this case it was being set to 0.
I realized that because the source address contains a hyphen (e.g. Test-SM), TONNPIDefaults was setting the address TON/NPI to 0, since the regex only checks for numeric and alphanumeric addresses.
Changing the alphanumeric character class to graph solves the issue, but I'm not convinced it's the right solution, since it opens up the possibility of passing arbitrary special characters as the source address. I've mostly seen hyphens and underscores used in source addresses; the other solution would be to check whether the TON and NPI were set to 0 and, if an underscore or hyphen is present, set the TON to 5 (alphanumeric).

What do you think? Is there a better approach?
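As a stop-gap on the application side (rather than changing the regex), the TON/NPI can be passed to the factory explicitly so the defaults are never consulted. A sketch, assuming the tuple form of the factory uses the given values verbatim, as in the other examples in these issues:

# ton 5 = alphanumeric, npi 0; the destination number is a placeholder
pdu = SMPPEX.Pdu.Factory.submit_sm({"Test-SM", 5, 0}, {"254700000000", 1, 1}, "hello")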

Receiving multipart SMS

This is not an issue, more of a feature request. I've been receiving multipart SMS from the SMSC. Currently I store the various parts in an ETS table, using a combination of source_addr, destination_addr and the multipart reference as the key, with the actual message parts and their sequence numbers stored in a MapSet. I was wondering if there's an existing feature for handling/assembling incoming multipart messages. Below is my current implementation for handling multipart using ETS.

def handle_cast(
      {:pdu, %{mandatory: %{destination_addr: short_code, source_addr: msisdn, esm_class: 64}} = pdu, _pid},
      state
    ) do
  {:ok, {ref, count, seq}, msg} = SMPPEX.Pdu.Multipart.extract_from_pdu(pdu)
  # Can use :erlang.phash2 for a shorter key
  key = "#{msisdn}_#{short_code}_#{ref}_#{count}"

  case lookup(key) do
    {:error, :not_found} ->
      :ets.insert(__MODULE__, {key, MapSet.new([{seq, msg}])})

    {:ok, [{_, result}]} ->
      process_message(result, key, count, seq, msg)
  end

  {:noreply, state}
end

defp process_message(result, key, count, seq, msg) do
  if MapSet.member?(result, {seq, msg}) do
    :ignore
  else
    result = MapSet.put(result, {seq, msg})

    if Enum.count(result) == count do
      merge_message(result)
    else
      update_element(key, {2, result})
    end
  end
end

defp merge_message(result) do
  msg =
    result
    |> Enum.sort()
    |> Enum.map(fn {_, ms} -> ms end)
    |> Enum.join()

  {:sms, msg}
end

defp update_element(key, value) do
  :ets.update_element(__MODULE__, key, value)
end

defp lookup(key) do
  case :ets.lookup(@table_name, key) do
    [_ | _] = result -> {:ok, result}
    _ -> {:error, :not_found}
  end
end

submit_sm_resp with non-zero status and body crashes connection

Hello,

We are receiving the following submit_sm_resp PDU when sending a message fails (hex dump):

00 00 00 21 - 80 00 00 04 - 00 00 00 0b -
00 00 00 02 - 30 41 30 30 - 30 30 30 30 -
41 33 44 33 - 32 33 41 31 - 00

Causing the following crash in the SMPP connection:

esme #PID<0.662.0>, unknown pdu: %SMPPEX.RawPdu{body: <<48, 65, 48, 48, 48, 48, 48, 48, 65, 51, 68, 51, 50, 51, 65, 49, 0>>, command_id: 2147483652, command_status: 11, sequence_number: 2}("Unexpected end of data"), stopping

What are your thoughts?

Thanks in advance.

EDIT: The body seems to contain the message id in an ASCIIZ string.

Q: Why are the body parameters in submit_multi_resp mandatory?

Hey guys,

According to the SMPP 3.4 spec, submit_multi_resp should contain unsuccess_smes as a list of unsuccessful submissions, but only when there are one or more unsuccessful SMEs. So it's not clear to me whether those parameters are strictly mandatory. As it stands, the SMPP server always needs to respond with no_unsuccess and unsuccess_smes, even if all submissions were successful.

Why is the interface version not sent?

Hi @savonarola,
I know that smppex only supports SMPP 3.4. In e.g. bind_transmitter the interface version isn't set by smppex.


That's a problem, because I need to talk to another SMPP server which checks the version. According to the docs there is no way to set the version manually.

External sequence number generation and pdu storage

We have been running with the fork https://github.com/MarkMagnus/smppex for the last 3 years.

This fork has the following modifications

  1. External sequence number generation.
  2. Separated processes for PDU storage.

Our system sends SMS to multiple suppliers, over multiple connections, for multiple clients simultaneously. Messages are tracked by predetermined sequence numbers, generated for each supplier independently. We are now considering re-basing the code, either by forking again and re-implementing the changes, or by doing the work here and then submitting a pull request.

We would like to know your thoughts, and any suggestions on how this could be improved.

v2 drops handle_parse_error support.

As specified in https://github.com/savonarola/smppex/pull/30, I'd prefer to be able to override the default behavior and handle parse errors by myself (dropping the pdu) instead of automatically stopping my ESME/SMSC.

Our current behavior is we ACK the pdu based on the header, drop it and just keep on going. If we just immediately cut the session, the carrier will just retry the same pdu (and then we stop again -- it's continually killing our session every minute or two).

In v1, we did:

def handle_parse_error({:unparsed_pdu, raw_pdu, reason}, st) do
  Logger.warn(state: :parse_error, reason: inspect(reason), pdu: inspect(raw_pdu, limit: :infinity))

  command_id = 0x80000000 + raw_pdu.command_id
  code = Errors.code_by_name(:RINVTLVLEN) # TODO: expand to handle more flexibly
  resp = Pdu.new({command_id, code, 0})
  SMPPEX.ESME.reply(self(), raw_pdu, resp)
  {:ok, st}
end

But v2 drops support for that.

Generic nack - recognized as pdu - not as pdu response

https://github.com/savonarola/smppex/blob/1c8dbd9673291431b2d329a2cb20134c91857af2/lib/smppex/pdu.ex#L375

Hi, I would like to ask whether generic_nack is recognized as a plain PDU, rather than as a response PDU, on purpose.

Usually I receive it as a response to a malformed PDU (wrong length, unsupported command id). Currently, when I receive a generic_nack as a response, it arrives as an incoming PDU rather than a PDU resp, and the original PDU times out. Would it be possible to handle it as a response, or is there a reason why generic_nack is not recognized as one?

KR

Receive SMS

I would like to bind to and receive SMS from a telecom SMSC. I've been going through the docs, and it seems there is no documentation for the function I suppose will allow me to do this. Can you educate me a bit on this aspect of your awesome library? Below is my code.

defmodule Bundle.Renewal do
  use SMPPEX.Session
  require Logger

  def start_link() do
    SMPPEX.ESME.start_link(
      Application.get_env(:bundle, :smsc),
      Application.get_env(:bundle, :smsc_port),
      {__MODULE__, []}
    )
  end

  def init(_, _, _) do
    SMPPEX.Pdu.Factory.bind_receiver("123", "123")
    {:ok, nil}
  end

  def handle_pdu(pdu, state) do
    Logger.info("#{inspect pdu}")
    {:ok, state}
  end
end

Below is what I see in the logs:

[info] Session #PID<0.545.0>, being stopped by timers(session_init_timer)
[info] Session #PID<0.545.0> stopped with reason: {:timers, :session_init_timer}, lost_pdus: []
[error] GenServer #PID<0.545.0> terminating
** (stop) {:timers, :session_init_timer}
Last message: {:check_timers, -576460219991}
State: %SMPPEX.TransportSession{buffer: "", module: SMPPEX.Session, module_state: %SMPPEX.Session{auto_pdu_handler: %SMPPEX.Session.AutoPduHandler{by_ref: #Reference<0.3437791628.3933863937.104761>, by_sequence_number: #Reference<0.3437791628.3933863937.104760>}, module: Bundle.Renewal, module_state: nil, pdus: %SMPPEX.PduStorage{by_sequence_number: #Reference<0.3437791628.3933863937.104759>}, response_limit: 60000, sequence_number: 0, tick_timer_ref: #Reference<0.3437791628.3933732865.105160>, time: -576460219991, timer_resolution: 100, timers: %SMPPEX.SMPPTimers{connection_time: -576460230091, enquire_link_limit: 30000, enquire_link_resp_limit: 30000, enquire_link_state: :active, inactivity_limit: :infinity, last_peer_action_time: 0, last_transaction_time: 0, session_init_limit: 10000, session_init_state: :established}}, ref: #Reference<0.3437791628.3933732865.104756>, socket: #Port<0.68>, transport: :ranch_tcp}

Thanks a lot for your support.

Don't use GenServer labels in TransportSession

I am trying to add process instances of TransportSession to Swarm; that library assumes it can send me messages with the :'$gen_call' label and that I handle them via handle_call(). It seems to me that you should be using :gen.call with your own label inside the TransportSession module instead of the GenServer label. This would avoid compatibility issues with other libraries. The user-supplied module would then handle all :'$gen_call' and :'$gen_cast' requests.

Track pdu/pdu_resp with external ids

Hi! In our use case, we construct a PDU, store it in the DB, and send it off (submit_sm). Once we get a submit_sm_resp, we want to update the DB row to set its state to "sent" and store the message_id returned in the submit_sm_resp. While handle_resp(pdu, original_pdu, st) gives us the pdu + original_pdu, it's impossible to tell which PDU in the database it is, since we can't tag the PDU with the database id (and the refs returned by make_ref() can't be properly serialized into the DB).

What would be the best way to do that? I've considered tagging the PDUs with an optional TLV field, ":internal_id", so that it'd stay on the original PDU, but that would mean extra bytes getting sent off (as well as leaking our internal ids to the other side). Another option I've considered is forking the code and modifying PduStorage to store a "seq_number -> uuid" pair instead, so that handle_resp would return the uuid as the second argument instead of the original_pdu; however, that doesn't seem too attractive, as I'd need to maintain a fork...
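One more option that avoids both the TLV and the fork: keep a %{ref => db_id} map in the session state, keyed by the ref smppex already attaches to every PDU, and resolve it in handle_resp/3. This is a rough sketch; the callback shape and the :refs field are assumptions based on the other examples in these issues, and MyRepo.mark_sent is a hypothetical persistence call.

# wherever the pdu is built and handed to the session:
#   pdu = SMPPEX.Pdu.Factory.submit_sm(src, dst, text)
#   st = %{st | refs: Map.put(st.refs, pdu.ref, db_id)}

def handle_resp(resp_pdu, original_pdu, st) do
  {db_id, refs} = Map.pop(st.refs, original_pdu.ref)

  if db_id do
    # hypothetical persistence call: store the carrier message_id, mark as sent
    MyRepo.mark_sent(db_id, SMPPEX.Pdu.field(resp_pdu, :message_id))
  end

  {:ok, %{st | refs: refs}}
end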

Process name for SMPPEX.ESME.start_link

Can we add a process name option to SMPPEX.ESME.start_link, like in GenServer:

# Start the server and register it locally with name MyStack
{:ok, _} = GenServer.start_link(Stack, [:hello], name: MyStack)

# Now messages can be sent directly to MyStack
GenServer.call(MyStack, :pop)
#=> :hello

Right now, if we need a process name for an SMPPEX.Session, we have to add Process.register to the init callback. It is not very convenient.
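For reference, the current workaround reads roughly like this (the init/3 arity follows the other examples in these issues, and it assumes send_pdu accepts a registered name the way a regular GenServer call does):

defmodule MyESME do
  use SMPPEX.Session

  def init(_socket, _transport, args) do
    # register the session under a well-known name so callers can avoid the pid
    Process.register(self(), __MODULE__)
    {:ok, args}
  end
end

# SMPPEX.ESME.start_link(host, port, {MyESME, []})
# ...later, from any process:
# SMPPEX.Session.send_pdu(MyESME, pdu)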
