

Trunk Server

This is the software behind OpenMHz. For a while I thought about trying to make a business out of OpenMHz. I have come to realize, though, that I don't really want to run a business; I like building things a lot more.

In that vein, let's build some awesome things. Help me make OpenMHz better. Take the code and build a scanning site for your community. Add those features you have always been looking for. We can do better together!

  • Luke

If you are using this project as part of a business, please become a sponsor. A lot of time and effort went into building this.

Notes

This code is pretty poorly commented; I am going to work on that. I am also going to use the Wiki to document the project. Please add to the documentation as you learn things.

There are a bunch of experiments lurking in the code. There is some code for adding in Stripe payments. I was going to roll out the concept of paid accounts with additional features. There is also a half-completed effort to allow more than one user to be associated with a system.

I haven't done a great job of keeping all the packages up to date... and I never got around to adding tests. Both of these would be great things for folks to go after.

PROD vs TEST Env

The Prod environment expects you to be using HTTPS for your domains. It is pretty easy to use Let's Encrypt to grab Certs for the domains you are using. If you don't want to set that up, just run things using the docker-test.sh script.

Path Forward

It would be great to get the code to a place where there is base code and people can add customization on top of that. They would probably fork the base code, add additional features and design, and then rebaseline as new things are added to the mainline.

Architecture

There are a lot of different components that make up the system. All of the server code is written in NodeJS and the frontend code uses React. Each of the different system components is run as a separate container. A docker-compose script is used to start everything up. Right now, it is being operated on a single machine. It wouldn't be too hard to split it over a couple machines using Kubernetes. Semantic UI React is being used to create all of the UI components.

  • account: this frontend / server handles user account creation and profiles. When a user logs in, they are redirected to this app.
  • admin: a logged-in user uses this frontend / server to manage their systems.
  • backend: this is just a server with no web frontend. It provides the API for uploading, filtering and fetching calls. It should also handle all of the metadata around a call.
  • frontend: the frontend / server that the general public uses to look through systems and listen to calls.
  • mongo: all of the metadata around calls is stored here, along with the user and system information.
  • nginx: proxies all of the calls to the correct server and handles the HTTPS certs.

Easy Install

NOTE: This may no longer be the easy method... I haven't updated the Ansible scripts in a while. Use at your own peril!! PRs wanted if you fix them up. I put together an Ansible script to help make it easier to set up an OpenMHz server. You still need to get yourself a Droplet from Digital Ocean, a domain name and some storage. The script helps download and build everything.

Operations

You can run things in two different modes, test and prod. The big difference is that prod expects all of the connections to be HTTPS and gets cranky when they are not. I have gotten test to run fine on my laptop, so that is probably a good starting point.

DNS Entries

You should have a domain name pointing to the IP address of the server you are going to use. CNAMEs also need to be created for the various services. Create the CNAMEs below with your DNS host:

  • api
  • account
  • admin
  • www

After doing this, you should have the following domains: api.domain.com, account.domain.com, admin.domain.com, www.domain.com
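
Once the records have propagated, a quick lookup should return your server's IP. A minimal sketch, assuming the dig tool is installed; substitute your own domain:

dig +short api.domain.com
dig +short account.domain.com
dig +short admin.domain.com
dig +short www.domain.com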

S3 Storage

Currently, both test and prod expect to use S3-based storage instead of local storage. Switching to local storage would be relatively easy - but for the sake of testing, let's just say: use something S3-compatible. Make sure that ~/.aws/credentials has the credentials you'd like to use with your S3-compatible storage provider, e.g.:

[default]
aws_access_key_id = [..]
aws_secret_access_key = [..]
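
Before starting the containers, you can sanity-check the credentials by listing your buckets with the AWS CLI. A minimal sketch, assuming the AWS CLI is installed; the endpoint URL is Wasabi's and is only an example - substitute your provider's:

aws s3 ls --endpoint-url https://s3.wasabisys.com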

Automatically renewing SSL Certificates

SSL certs are automatically fetched from Let's Encrypt using the CertBot tool. The approach taken is based on this Medium post and accompanying GitHub repo.

You do need to jump-start the process and do an initial fetch. To get started, make sure you have your prod.env file filled out. If you don't, copy prod.env.example to prod.env and fill in the details. Make sure DOMAIN_NAME and REACT_APP_ADMIN_EMAIL are correct and are the values you want to use in production. Those values will be used when requesting an SSL cert from Let's Encrypt.


In the main directory of the Trunk Server repo, run the following commands:

source prod.env
docker compose -f certbot-compose.yml up

Check the output from CertBot - and when it is done, just hit ctrl + c to exit.

And then run the following to make sure everything has stopped:

docker compose -f certbot-compose.yml down
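
To double-check that the cert was actually issued, you can inspect its expiry date with openssl. A sketch only: the path below is Certbot's default layout, which is an assumption - your compose file may map the certs somewhere else:

source prod.env
sudo openssl x509 -noout -enddate -in /etc/letsencrypt/live/$DOMAIN_NAME/fullchain.pem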

Configure

The configurations for both test and prod come from environment variable files that are read in before the containers are started. Copy the example files and fill in the required info:

cp test.env.example test.env
cp prod.env.example prod.env

Fill in:

  • MailJet information
  • S3 information
  • Site name
  • Admin email (outgoing mail will appear to come from this address; make sure it matches the domain MailJet is configured for)
  • How many days calls will be archived for... this is just a UI setting; you need to create S3 lifecycle rules to actually delete them
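
Put together, a prod.env might look something like this sketch. DOMAIN_NAME and REACT_APP_ADMIN_EMAIL are referenced elsewhere in this README; the remaining variable names live in prod.env.example, so the comments below are just placeholders:

# illustrative sketch only - check prod.env.example for the real variable names
DOMAIN_NAME=domain.com
REACT_APP_ADMIN_EMAIL=admin@domain.com
# MailJet keys, S3 settings, site name and archive days go here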

Scripts

./docker-test.sh - sets up docker-compose to run with the correct environment variables for testing

./docker-prod.sh - sets up docker-compose to run with the correct environment variables for production

If you are having trouble with Docker Compose not building the images, try adding "--parallel 1" so that only one image builds at a time.

Services I Like

  • 💻 I use Digital Ocean and they have been pretty great. If you need hosting, give them a try and use my referral code: https://m.do.co/c/402fa446f7a6 You get $100 of credit to use in 60 days.

  • 💾 If you need storage, give Wasabi a try. They have been mostly reliable and you can't beat the price: https://wasabi.com

  • 📨 I use Mailjet... and you need to also. It makes it easy to send out email address confirmations: https://www.mailjet.com

  • 🔒 I use Let's Encrypt. It works.

Local Testing

Local DNS

It helps to have similar subdomains mapping to localhost/127.0.0.1. I used the domain openmhz.test. Feel free to use this, or come up with something clever. If you do something different, make sure you use that instead in the commands below.

On macOS, make the changes in /etc/hosts:

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1       localhost
127.0.0.1       openmhz.test account.openmhz.test api.openmhz.test admin.openmhz.test media.openmhz.test

Then load these values into local DNS: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
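
You can confirm the mapping took effect with a directory-services query (macOS):

dscacheutil -q host -a name api.openmhz.test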

  • In root dir, run: ./docker-test.sh build
  • ./docker-test.sh up -d

You can then browse to:

  • openmhz.test
  • admin.openmhz.test

Interesting note: Safari 13+ does not like the .test TLD and doesn't seem to want to store cookies for that TLD. It seems to work fine in production when you are using a real TLD. I guess use Chrome for local testing, or a different TLD for testing, like .local. https://stackoverflow.com/questions/62023857/sharing-cookies-across-test-sub-domains-in-safari-13-not-possible

Debugging Node Servers using Dev Containers

VS Code makes it easy to work on code running in a Docker container - whether it is on your machine or on a remote host. The Dev Container extension lets you edit files running inside a Docker container. Install the Remote Explorer extension to work with remote hosts over SSH.

If you are editing on a remote host, first use the Remote Explorer to get to that machine.

Now, launch all of the containers using ./docker-test.sh up -d. The Account, Admin, Frontend and Backend node servers will all be started using Nodemon. When you make a change to any of the server-related files, the server will reload. This makes it easy to edit and test against running code in a full deployment. To go to one of these containers, open the Remote Explorer extension from the sidebar, select Dev Containers from the dropdown in the upper right of the sidebar, and then choose the container you wish to work on.

Debugging React Apps using Hot Reloading

If you are trying to make changes to any of the React frontends, it is a huge pain to have to compile the site and rebuild the container each time you make a change. Instead, simply run the React app in development mode. This will work for the frontends for the:

  • admin frontend
  • account frontend
  • frontend... frontend

First, start up all of the containers as described above. You will still need the backend APIs they provide.

Now go into the respective sub directory for the component you are interested in and run:

source ../test.env
yarn install #only need to do this the first time, it installs the Node packages locally
yarn start

This should build the frontend and open a browser. In order to have all the cookies work correctly, you have to use the same domain name, so make sure you have set up the local domains as described above. Then go to the base domain at port 3000; for me that is openmhz.test:3000

Upgrading a "Frontend" server

https://create-react-app.dev/docs/updating-to-new-releases/

Managing MongoDB

MongoDB is used in the backend to store data. It is pretty fast, flexible and has worked well enough for me. All of the files that MongoDB uses to store the DB are in the /data directory, which gets mapped into the container. Mapping this directory makes sure that the data persists each time you run the mongo container.

Working with the MongoDB Container

From the Host OS run:

docker exec -i -t $(docker ps -a | grep mongo | awk '{print $1}') /bin/bash

Then launch the mongo CLI tool and find any users you have created:

mongo
use scanner
db.users.find()

Now swap in the ObjectId for your user and run this command. It marks the email as confirmed, so you don't need to click the link in the confirmation email.

db.users.updateOne(
   { "_id" : ObjectId("63a620d0a63b087b005f6726") },
   {
     $set: { "confirmEmail" : true },
     $currentDate: { lastModified: true }
   }
)

There are a few scripts included with the container:

  • clean.js - removes all Calls that are over 30 days old
  • totals.js - lists different system stats

Compact a collection

When you run clean.js, it doesn't actually shrink the files on storage. You can reclaim the space with this command from the mongo CLI tool.

First, launch the tool: mongo
Then switch to the scanner db: use scanner
And then run the compact command on the Calls collection:

db.runCommand({compact:'calls'})

This blocks all calls to the DB, so the site will not work while this is being run.

Add an Index

Adding an index will make it quicker to search calls by date, talkgroup and whether there are stars.

First, launch the tool: mongo
Then switch to the scanner db: use scanner
And then add an index:

db.calls.createIndex( {shortName: 1, time: -1, talkgroupNum: 1})
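
To confirm queries are actually using the new index, you can ask Mongo for the query plan. A sketch; the shortName value is a placeholder:

db.calls.find({ shortName: "yoursystem" }).sort({ time: -1 }).explain("executionStats")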

Upgrading MongoDB

It is a huge pain to upgrade MongoDB in place. It turns out to be easier to dump a backup of the database, wipe everything out and then restore into a new database running the latest version of Mongo.

Rough playbook (use common sense, I may not have this exact; a consolidated sketch follows the list):

  • get into the shell of the mongo container
  • mongodump --uri="mongodb://127.0.0.1" --db scanner --out /data/db/backup
  • exit the container and go back to the host machine
  • cd data/db
  • rm * to erase everything... but not the sub-directories, because that is where the backup is
  • upgrade to the latest version of mongo
  • build and launch the mongo container, which will create an empty DB
  • get into the shell of the mongo container
  • mongorestore --db scanner --drop /data/db/backup/scanner
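
The same playbook, consolidated into a rough shell sketch - the container name filter and paths are assumptions, adjust them to your setup:

# dump the scanner db from inside the running mongo container
docker exec -it $(docker ps -qf name=mongo) mongodump --uri="mongodb://127.0.0.1" --db scanner --out /data/db/backup
# back on the host: erase the old DB files but keep the backup sub-directory
cd data/db && find . -maxdepth 1 -type f -delete
# ...upgrade Mongo, rebuild and relaunch the container, then restore...
docker exec -it $(docker ps -qf name=mongo) mongorestore --db scanner --drop /data/db/backup/scanner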

Privileged Port Error

Error response from daemon: driver failed programming external connectivity on endpoint trunk-server-nginx-1 (ec995a6333ce5b9869b9070e380d01b81333fdb3f2734bb93922c1ef7d84314d): Error starting userland proxy: error while calling PortManager.AddPort(): cannot expose privileged port 443, you can add 'net.ipv4.ip_unprivileged_port_start=443' to /etc/sysctl.conf (currently 1024), or set CAP_NET_BIND_SERVICE on rootlesskit binary, or choose a larger port number (>= 1024): listen tcp4 0.0.0.0:443: bind: permission denied

https://docs.docker.com/engine/security/rootless/#exposing-privileged-ports
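
The error message suggests the fix itself: lower the unprivileged port floor on the host so rootless Docker can bind to 443:

echo "net.ipv4.ip_unprivileged_port_start=443" | sudo tee -a /etc/sysctl.conf
sudo sysctl --system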

Setting up Logging

I have had good luck with Loggly. Their free tier provides enough capabilities for most small sites. The Docker Logging Driver works well and is easy to install:

https://documentation.solarwinds.com/en/Success_Center/loggly/Content/admin/docker-logging-driver.htm

Move a site to a new server

Here is the general list of things to do (a rough command sketch follows the list):

  • Copy over ~/.aws
  • Copy over ~/.secrets
  • Do a mongodump on old machine
  • Scp the mongo backup to the new machine
  • Launch the MongoDB container on the new machine
  • Do a mongorestore
  • Launch all the containers
  • Change the Floating IP to point to the new machine
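
A rough shell sketch of the same steps - hostnames and paths are placeholders:

# on the old machine
scp -r ~/.aws ~/.secrets newhost:~/
docker exec -it $(docker ps -qf name=mongo) mongodump --uri="mongodb://127.0.0.1" --db scanner --out /data/db/backup
scp -r data/db/backup newhost:~/trunk-server/data/db/
# on the new machine, after launching the mongo container
docker exec -it $(docker ps -qf name=mongo) mongorestore --db scanner --drop /data/db/backup/scanner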


trunk-server's Issues

Feature: Volume Control

Personally, I would find it very helpful to have a volume control slider (preferably next to the play button) to control the volume of playback.

Password Reset Email Case Sensitivity

Just a heads up that it seems the email address in the password reset function is case sensitive. If you use a different case for the email address than you signed up with, it will respond with No account register for [email protected].

Adding a delay for certain talkgroups

This is a continuation of this issue: robotastic/trunk-recorder#331 from Trunk Recorder to cover items that could be handled server side.

I think I could add something to handle this... the basic plan would be to add a do-not-play-until field to all the call records and then make that part of the Call Fetching search. I would have to check and see what that would do for performance.
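
A minimal mongo-shell sketch of what that fetch query might look like - the doNotPlayUntil field name is hypothetical, taken from the plan above:

db.calls.find({
  shortName: "yoursystem",
  $or: [
    { doNotPlayUntil: { $exists: false } },   // calls with no delay set
    { doNotPlayUntil: { $lte: new Date() } }  // delayed calls now playable
  ]
}).sort({ time: -1 })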

API docs

Hi, are there docs on the API anywhere?

Thanks!

Mobile Streaming

As of last night, systems are not streaming on mobile devices, only on PC

Add support for patched talkgroups

Modern trunk-recorder versions include the patched_talkgroups attribute within a call.

It would be awesome to extend OpenMHz to support this (would also need to update the openmhz_uploader plugin in trunk-recorder), and include the call in both talkgroups.

If the current model can't easily be extended to allow a single call to exist in both TGs, could also create a copy of it in each TG.. but that would result in duplicate calls if the end user has both TGs selected.

Event "download event" issue

The Download event function has a small ease-of-access problem.

Because of the way files are automatically named, when a number of calls from 2 or more talkgroups are downloaded from the same event, they will appear in the folder out of order.

This is due to the names assigned to the files. When downloaded, the files will appear like this:

ex. mocomdps-4005-1676726457

'mocomdps' being the radio system, '4005' being the assigned talkgroup number, and I don't know what the third number is.
If you download an event with calls from multiple talkgroups (say TG 4000, 4005, and 4010), the 4000 calls will appear in the folder first due to alphabetization order, then the 4005, then 4010.

So a proper sequence of calls, say transmissions on 4000, 4010, 4005, 4010, 4010, and 4000 again, will download as
4000, 4005, 4010, 4010, 4005, even though this is now out of order.

Add Mongo index for calls

I noticed that performance was kind of crummy on my test OpenMHz box for the calls list (older/newer), so I took a look at Mongo... and we're not creating an index for that!

I need to dig into how to do this properly with Mongoose, but in the meantime..
db.calls.createIndex( { "shortName": 1, "time": 1, "talkgroupNum": 1 } )

This takes requests from a few hundred milliseconds down to 10ms or so on my crummy test box.

FDNY Feeds Down

All the FDNY feeds have been down since yesterday, except for CW-2...

Feature: Mobile - Keep screen awake

I've started to listen to OpenMHz on my cell phone (iPhone) and noticed that due to the lack of a constant stream (which is the nature of calls), my phone will not play new calls if the webpage is not in the foreground. While solving that directly would be awesome, I expect it is a hard problem.

In the meantime, I propose that when autoplay is enabled on a feed, OpenMHz keeps the screen awake. This would allow new calls to be played because the webpage is still in the foreground of the cell phone.
I believe this can be done via NoSleep.js, outlined here.

Raspberry Pi

Just wanting to know if this will work on the Raspberry Pi 3B+?

Auto-scroll

It would be great if OpenMHz would auto-scroll to keep new calls at the top, similar to Broadcastify Calls.

Duplicate messages appearing

Occasionally the call list will have a duplicate message come in during or after a message has already been received. This will sandwich individual records, but then the autoplay will pass over the second duplicate and skip playing the individual records.

[screenshot]
In the above case you can see that there are a few unique messages in between these duplicates, but those won't be played, as the top duplicate will be interpreted as the call that is "currently playing". It looks like this happens when a call comes in with a later timestamp before messages in the regular chronological order arrive. The second record then comes in where it normally should chronologically, but since it already arrived, it becomes a duplicate.

Also, thanks for all the work you've put into this :) It's been amazing to use!!

FDNY Brooklyn Talkgroup

During the past couple of days I'm encountering issues across all FDNY talkgroups; you hear intermittent 20 sec beeps (yeah, like that beep when they're transmitting a job...)

Now I'm not getting any transmissions on the Brooklyn talkgroup except for that annoying 20 sec beep, and I know it's not a borough problem because it works all fine on Broadcastify, so it's definitely on the OpenMHz end.

Here's a direct link to the talkgroup: https://openmhz.com/system/fdnyfire?filter-type=talkgroup&filter-code=4

County systems show as "undefined".

If you assign your system location to be a county, it shows up as "undefined, STATE" in the Systems list. Assigning a city or only a state seems to work correctly.

Links not working as expected

SCENARIO:
There's an incident and you go find the first call about it and use the Link feature to share it with others/for later.

Expected results:
The link (https://openmhz.com/system/system?filter-type=talkgroup&filter-code=###&time=#####) opens with the call specified selected and following calls loaded

Actual result:
The specified call is the latest one loaded and is not selected. Looking at the browser console, the calls/newer? URL is never fetched on page load. Sometimes, scrolling down and then up does get newer calls to load.

Create links to Radio Reference DB entries

Not sure this is the right place for it, but it would be nice to have links from the system descriptions to the Radio Reference entries for the system, e.g. have a place where one can input the RR ID of the system.

Spectrogram audio player

Whenever I fiddle with audio, and it's not something I can visualize directly (e.g. a gnuradio plot sink), then I'll go straight to an audio workstation/editor like Audition or Audacity.

These tools are not only great for audio analyses and mangling for testing, but they're also great for visual playback. Seeing periods of inactivity lets you seek forward to where there's audio activity again, or for visually identifying artificial alert tones - which are remarkably easy to pick out even in FTs of low sampling frequency.

Instead of the barebones audio player that exists right now, I believe having this same visual playback would be a wonderful creature comfort for end users, and one that's possible with this JavaScript library that I saw a few years ago: wavesurfer.js.

I'd imagine that modifying the player and adding this feature to just the currently chosen/playing audio is easiest, with a toggle for users to enable/disable this feature if the 8-16 kHz audio low resolution FFT is too taxing locally. This also has the benefit of no additional cost to server-side compute.

Maybe spectrograms could be generated for every call in the archive view with the FFT as a background of each call entry or something to that effect - but in order to keep processing local, this would require preloading of audio and that wouldn't be cheap on bandwidth, although it's not a far stretch from the bandwidth consumed from autoplayer-enabled views.

Feature: Support for multiple nodes grouped within one larger system

(Note, not necessarily asking you to do any work here - just opening this for discussion. I may try to implement this at some point, we'll see..)

Here in Minnesota, there is a statewide P25 system called ARMER, which almost all of the state uses for its emergency communications. There are then multiple simulcast sites. Within each site, you only get a small subset of the entire system's calls; I believe a site only gets calls for a given talkgroup if there is a radio that receives that talkgroup currently connected to that site. (There may be other roaming limits/etc too - but I know that officers can be halfway across the state and still receive the 'local' calls.)

Currently, there are two options to handle this in trunk-server:

  1. Create a separate system for each site that you have a scanner for. This is the way people are currently setting things up with OpenMHz. It works - but it means you might have to browse a bunch of different systems to find a call or see if a call is available.
  2. Set up one system for the overall P25 system, and have multiple sites upload calls to that system. I haven't tested this yet, but it should work in theory. The downsides I see are:
  • No duplicate call handling -- duplicate calls will just show up multiple times.
  • No ability to pick a specific site to listen to calls only from that site.
  • The API key is the same for the entire system -- this is fine if you have one person managing all the scanners, but if you have multiple people, you're sharing credentials.

I'm thinking it would be nice to set up the ability to do a hybrid, sort of like what Broadcastify has done with Calls..

  1. Add another tier under Systems called Sites. The Site would basically just be an API key for uploads; TGs/Sources would belong to the System.
  2. Add some sort of duplicate call detection. I like the idea of just collapsing the calls detected as "duplicate" into a list, where you can still select from the calls that are available. This would avoid the behavior I see with BC Calls where sometimes the first call recorded is junk, but other sites that attempt to upload the call get denied because it's duplicate.
  3. On the frontend, give the ability to listen to the entire system (with talkgroup filtering options/etc), or to select a given Site to listen to.

Missing filter-type=unit functionality

It appears that support for filter-type=unit was removed from the calls.js controller earlier this year, but references to this filter still exist in /backend/public/js/scanner.js and /backend/index.js. Is there a new way to filter calls by their srcList[i].src values?

Support unit ID labels

It would be great to be able to upload a unit ID CSV file similar to talkgroups.

For example, an extract of a file I have locally:

3231425,BELGRAVE HEIGHTS - FS - Pumper/Tanker Portable 02
3231424,BELGRAVE HEIGHTS - FS - Pumper/Tanker Portable 01
3231338,BORONIA - FS - Vehicle 1 Portable 01
3231330,BORONIA - FS - Tanker 1 Portable 02

Seeing which units are involved in logged calls/filtering by them would be a useful feature where that data is available.

admin.openmhz.com is overly aggressive with its character stripping in titles and descriptions

It's very hard to write a coherent description for a feed, as it strips things like parentheses, line feeds, and others.

For instance, if I type a feed title like this:

Puget Sound Emergency Radio Network (PSERN) Full

it becomes:

Puget Sound Emergency Radio Network PSERN Full

If I try to add line feeds to a feed description to separate out areas, they get removed, putting all the text into one continuous blob. (Interestingly, line feeds are preserved if I edit the description, but parentheses are stripped out completely.)

It looks like all of these characters get stripped when saving: ~!@#$%^&*()+=``~"'<>?/\|.

Surely there's a way to be less heavy-handed here by not stripping these characters out. (I'm assuming this is some sort of injection or XSS defense-in-depth, but there are plenty of mitigations for this that don't involve discarding user input.)

Autoplay misses calls that have a start time older than the one currently playing

This is easily reproducible every few minutes, on a typical station with overlapping short and long recordings.

For example, 5:32:12 PM was left unread here:

[screenshot]

What seems to be happening: Recordings are being sorted by start time, and older, longer recordings are inserted before the "now playing" entry that was finished and submitted first:

  1. Receiver A starts recording at 5:32:12
  2. Receiver B starts recording at 5:32:17
  3. Receiver B finishes after 2 seconds, and uploads
  4. Player starts playing from 5:32:17
  5. Receiver A finishes after 12 seconds, and uploads
