
spark-protocol's Introduction

Changelog

0.1.5 - cleaning up connection key logging, adding early data patch
0.1.4 - adding verbose logging option
0.1.3 - adding OTA size workaround
0.0.2 - Working alpha version, needs refactor for API wrapper
0.0.1 - Initial imports and cleanup of base classes

spark-protocol's People

Contributors

antonpuko, brahma-dev, dmiddlecamp, durielz, flaz83, haeferer, jinyangqiao, jlkalberer, straccio


spark-protocol's Issues

Photon in OTA loop when upgrading from 0.6.3 to 1.4.0

Hello,

When I flash user code compiled against 1.4.0 to a Photon running system firmware 0.6.3, the server tries to OTA-upgrade the system firmware. The problem is that it ends up in a loop, never actually upgrading the Photon.

I tried first flashing user code for 0.7.0, in which case the upgrade went smoothly, then flashing code for 1.4.0, again a smooth upgrade. But the direct 0.6.3 -> 1.4.0 jump doesn't work.

Tested on multiple devices.

(question) Is this compatible with the Electron as well?

Hi!

Just wondering if this is compatible with the Electron as well.

I have been looking at ways to build my own backend (similar to what particle provides) but hosted at AWS.

I have been reading the discussion here

https://community.particle.io/t/local-cloud-a-k-a-spark-server-updates/26792/21

as well.

I understand this project provides OTA, events, and functions, as the official/paid Particle cloud does. Is this correct?

Thanks and sorry if this is not the right place to ask.

DeviceServer Clustering

Use clustering to improve speed. We will want to do this at the HTTP server and allow it through settings on the CoAP server (spark-protocol). We'll need a singleton way of passing events, so you'll need to update EventPublisher to send data between the forks.

Spark-protocol - Start with https://nodejs.org/api/cluster.html#cluster_cluster and https://gist.github.com/dsibilly/2992412


This should be optional and only turn on if the user wants it since by default spark-protocol uses the file-based repositories + caching.
This should automatically balance the devices between the forks. I don't really know a good way to do this :/ -- maybe this is a follow-up TODO

Errors on device reconnection after server shutdown

If I shut down the server with CTRL-C, I see a stream of the following errors as the devices reconnect:

Error: No device state!
at Device._callee5$ (/opt/particle/brewskey/spark-server/node_modules/spark-protocol/dist/clients/Device.js:661:21)
at tryCatch (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:64:40)
at GeneratorFunctionPrototype.invoke [as _invoke] (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:355:22)
at GeneratorFunctionPrototype.prototype.(anonymous function) [as throw] (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:116:21)
at step (/opt/particle/brewskey/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:17:30)
at /opt/particle/brewskey/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:30:13

I have not discovered any failures stemming from this error.
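One possible mitigation, sketched with hypothetical stand-ins for the server and device objects: on SIGINT, disconnect devices and close the listener before exiting, so teardown happens in a defined order rather than mid-handshake. This is not current spark-protocol behavior:

```javascript
// Build a SIGINT handler that tears connections down before exiting.
// `server` and `devicesById` are hypothetical stand-ins; `exit` is
// injectable so the handler can be exercised without killing the process.
function makeShutdownHandler(server, devicesById, exit = process.exit) {
  return () => {
    // Disconnect every attached device first so no handler fires against
    // a half-destroyed device state during shutdown.
    for (const device of devicesById.values()) {
      device.disconnect('server shutting down');
    }
    // Then stop accepting connections and exit once the socket is closed.
    server.close(() => exit(0));
  };
}

// Registration (real code should guard against running the handler twice):
// process.on('SIGINT', makeShutdownHandler(server, devicesById));
```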

subscribe events come before new device attributes are saved.

When we connect a device for the first time, the handshake automatically saves the public key, then _onDeviceReady is invoked. It should save deviceAttributes if they don't exist
(https://github.com/Brewskey/spark-protocol/blob/dev/src/server/DeviceServer.js#L254-L281)
...but since that operation takes some time, the error at
https://github.com/Brewskey/spark-protocol/blob/dev/src/server/DeviceServer.js#L332 is hit first.

I guess we can't delay device events, so the only fix I see is to get rid of auto-saving the publicKey during the handshake:
https://github.com/Brewskey/spark-protocol/blob/dev/src/lib/Handshake.js#L300-L305
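Alternatively, the ordering could be enforced on the _onDeviceReady side: persist the attributes before publishing any events for the device. A sketch, with the repository and publisher interfaces as simplified stand-ins for whatever spark-protocol actually injects:

```javascript
// Ensure device attributes exist in the repository *before* any event
// (e.g. the online status event) is published, so subscribe handlers
// can always resolve the device. All interfaces here are stand-ins.
async function onDeviceReady(device, attributeRepository, eventPublisher) {
  const deviceID = device.getDeviceID();
  const existing = await attributeRepository.getByID(deviceID);
  if (!existing) {
    // Await the save; publishing only happens after it completes.
    await attributeRepository.updateByID(deviceID, device.getAttributes());
  }
  eventPublisher.publish({ name: 'spark/status', deviceID, data: 'online' });
}
```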

Bunyan Log finished

Bunyan logging is finished, but I'm unable to commit (without disabling the pre-commit rule).
Reason: files are missing from the third-party directory.

Add Babel

Add Babel to this project and make sure that spark-server can still consume it once we add Flow.

Transform stream error(cipher/decipher)

Sometimes I see this exception. I don't know under what conditions it happens, since it is very rare:

uncaughtException {
  message: 'no writecb in Transform class',
  stack: 'Error: no writecb in Transform class\n    at afterTransform (_stream_transform.js:71:33)\n    at TransformState.afterTransform (_stream_transform.js:54:12)\n    at _combinedTickCallback (internal/process/next_tick.js:67:7)\n    at process._tickDomainCallback (internal/process/next_tick.js:122:9)' }

FirmwareManager not working with P1s

Cannot read property 'binaryFileName' of undefined TypeError: Cannot read property 'binaryFileName' of undefined
    at _callee$ (/opt/particle/brewskey/spark-server/node_modules/spark-protocol/dist/lib/FirmwareManager.js:177:104)
    at tryCatch (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:64:40)
    at GeneratorFunctionPrototype.invoke [as _invoke] (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:355:22)
    at GeneratorFunctionPrototype.prototype.(anonymous function) [as next] (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:116:21)
    at step (/opt/particle/brewskey/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:17:30)
    at /opt/particle/brewskey/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:35:14
    at Promise.F (/opt/particle/brewskey/spark-server/node_modules/core-js/library/modules/_export.js:35:28)
    at Function.<anonymous> (/opt/particle/brewskey/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:14:12)
    at Function.getOtaSystemUpdateConfig (/opt/particle/brewskey/spark-server/node_modules/spark-protocol/dist/lib/FirmwareManager.js:192:18)
    at DeviceServer._callee4$ (/opt/particle/brewskey/spark-server/node_modules/spark-protocol/dist/server/DeviceServer.js:446:60)
    at tryCatch (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:64:40)
    at GeneratorFunctionPrototype.invoke [as _invoke] (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:355:22)
    at GeneratorFunctionPrototype.prototype.(anonymous function) [as next] (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:116:21)
    at tryCatch (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:64:40)
    at GeneratorFunctionPrototype.invoke [as _invoke] (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:297:24)
    at GeneratorFunctionPrototype.prototype.(anonymous function) [as next] (/opt/particle/brewskey/spark-server/node_modules/regenerator-runtime/runtime.js:116:21)
    at step (/opt/particle/brewskey/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:17:30)
    at /opt/particle/brewskey/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:28:13
    at process._tickDomainCallback (internal/process/next_tick.js:129:7)

Connected Devices shown can be used as a configuration item.

Hi:
When I use my local spark-server, I found that the log line below is always shown, but I don't want it to appear so frequently:

[2019-02-20T10:59:04.459Z] INFO: DeviceServer.js/15464 on DTIT-YJin: Connected Devices (devices=0, sockets=0)

I think the interval could be a configuration item. The current code:

    setInterval(
      (): void =>
        server.getConnections((error: Error, count: number) => {
          logger.info(
            { devices: this._devicesById.size, sockets: count },
            'Connected Devices',
          );
        }),
      10000,
    );
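A sketch of the suggested configuration item, assuming a hypothetical `connectedDevicesLogIntervalMillis` setting (0 would disable the log line entirely; the current 10-second interval stays the default):

```javascript
// Resolve the logging interval from settings; `connectedDevicesLogIntervalMillis`
// is a made-up setting name, not an existing spark-protocol option.
function resolveLogInterval(settings) {
  const interval = settings.connectedDevicesLogIntervalMillis;
  return interval === undefined ? 10000 : interval;
}

// Usage in DeviceServer (sketch):
// const intervalMillis = resolveLogInterval(settings);
// if (intervalMillis > 0) {
//   setInterval(logConnectedDevices, intervalMillis);
// }
```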

CryptoManager TODO

Right now we ignore whether a user wants to password-protect their keys.

We should allow a user to pass in a password through constitute.

When I monitor the success message from ‘spark/flash/status’, my P1 device is still performing the update operation and it will take a long time to succeed.

Hi ,
When I update my P1 device via a PUT to ‘/v1/devices/:deviceID’,
I found that when I receive the 'success' message from ‘spark/flash/status’, my P1 device is still performing the update, and it takes another 10 to 20 seconds to finish.
I want to know whether ‘spark/flash/status’ is synchronized with the real update status of the P1 device.
Thanks very much!

support for System.disableUpdates() ?

Currently, spark-protocol doesn't support the OTA disable/enable functions on the device. When OTA updates are disabled via System.disableUpdates(), the cloud tries to push an OTA update, but fails and drops the connection to the device. The cloud doesn't recognize that updates are disabled, so it keeps dropping connections and retrying OTA in a loop until updates are enabled on the device again via System.enableUpdates().

We would like to disable updates for a couple of minutes after the startup of a device, when there is a higher risk that the user will power off the device again (tinkering with it). While the Photon should theoretically handle this, we've seen several cases (in a fleet of 700 devices) where this exact thing resulted in the Photon getting stuck in an OTA update and needing a manual flash of the firmware over USB.

This is output from the cloud when trying to OTA to a device with disabled updates:

{"name":"Device.js","hostname":"DXXXX","pid":2137,"level":30,"deviceID":"3b002f000147393038000000","msg":"flash device started! - sending api event","time":"2021-04-30T08:26:10.917Z","v":0}
{"name":"Flasher.js","hostname":"DXXXX","pid":2137,"level":30,"cache_key":"_6543","deviceID":"3b002f000147393038000000","msg":"fast ota enabled! ","time":"2021-04-30T08:26:10.917Z","v":0}
{"name":"Device.js","hostname":"DXXXX","pid":2137,"level":30,"cache_key":"_6543","deviceID":"3b002f000147393038000000","duration":5.652,"disconnectCounter":1,"msg":"Device disconnected","time":"2021-04-30T08:26:16.437Z","v":0}
{"name":"DeviceServer.js","hostname":"DXXXX","pid":2137,"level":40,"connectionKey":"_6543","deviceID":"3b002f000147393038000000","ownerID":null,"msg":"Session ended for Device","time":"2021-04-30T08:26:16.438Z","v":0}
{"name":"Device.js","hostname":"DXXXX","pid":2137,"level":30,"deviceID":"3b002f000147393038000000","msg":"Releasing flash ownership","time":"2021-04-30T08:26:16.438Z","v":0}
{"name":"Device.js","hostname":"DXXXX","pid":2137,"level":50,"deviceID":"3b002f000147393038000000","msg":"Flash device failed! - sending api event","time":"2021-04-30T08:26:16.439Z","v":0}
{"name":"DeviceServer.js","hostname":"DXXXX","pid":2137,"level":50,"deviceID":"3b002f000147393038000000","err":{"message":"Cannot read property 'message' of undefined","name":"TypeError","stack":"TypeError: Cannot read property 'message' of undefined\n at Device._callee9$ (/usr/local/bin/spark-server/node_modules/spark-protocol/dist/clients/Device.js:1017:66)\n at tryCatch (/usr/local/bin/spark-server/node_modules/regenerator-runtime/runtime.js:62:40)\n at Generator.invoke [as _invoke] (/usr/local/bin/spark-server/node_modules/regenerator-runtime/runtime.js:296:22)\n at Generator.prototype.(anonymous function) [as throw] (/usr/local/bin/spark-server/node_modules/regenerator-runtime/runtime.js:114:21)\n at step (/usr/local/bin/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:17:30)\n at /usr/local/bin/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:30:13\n at process._tickCallback (internal/process/next_tick.js:68:7)"},"msg":"Connection Error","time":"2021-04-30T08:26:16.439Z","v":0}
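Until the protocol can detect the disabled-updates state directly, one server-side mitigation would be to back off between OTA attempts instead of retrying in a tight loop. This is not current spark-protocol behavior; the function name and thresholds below are made up for illustration:

```javascript
// Decide whether to attempt another OTA flash, backing off exponentially
// after each failure (e.g. because the device has System.disableUpdates()
// active). Delays double per failure up to a cap.
function shouldAttemptOta(
  failureCount,
  lastAttemptMillis,
  nowMillis,
  baseDelayMillis = 30 * 1000,     // 30 s after the first failure
  maxDelayMillis = 10 * 60 * 1000, // never wait longer than 10 min
) {
  if (failureCount === 0) {
    return true; // no failures yet: try immediately
  }
  const delay = Math.min(
    baseDelayMillis * 2 ** (failureCount - 1),
    maxDelayMillis,
  );
  return nowMillis - lastAttemptMillis >= delay;
}
```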

pkcs decoding error

When I connect my P1 to the local cloud, I found that the P1 can't connect to the server, and I get this error:
Original error: Error: error:0407109F:rsa routines:RSA_padding_check_PKCS1_type_2:pkcs decoding error
from node-rsa\src\NodeRSA.js:301:19.

PS: the code is at commit b971834a34ab2f67031bd0c24c851716abf77cbd

npm install error

When I clone the newest spark-server dev branch and run npm install, at first everything seems OK, until it gets to update-firmware
and downloading voodoospark.bin, when something goes wrong like this:

Downloading photon_tinker.bin...
Downloading tinker-usb-debugging-v0.4.8-rc.6-electron.bin...
Downloading voodoospark.bin...
internal/streams/legacy.js:59
throw er; // Unhandled stream error in pipe.
^

Error: connect ETIMEDOUT 151.101.228.133:443
at Object.exports._errnoException (util.js:1024:11)
at exports._exceptionWithHostPort (util.js:1047:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1150:14)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] postinstall: update-firmware
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\me\AppData\Roaming\npm-cache_logs\2018-05-16T10_15_00_098Z-debug.log

call device function error

TypeError: Cannot read property 'length' of undefined
    at prepareOptions (D:\PROG\brewskey\spark-protocol\node_modules\coap-packet\index.js:400:30)
    at Object.generate (D:\PROG\brewskey\spark-protocol\node_modules\coap-packet\index.js:37:13)
    at Function._class.wrap (D:/PROG/brewskey/spark-protocol/dist/lib/CoapMessages.js:218:31)
    at Device._this.sendMessage (D:/PROG/brewskey/spark-protocol/dist/clients/Device.js:599:44)
    at Device._callee8$ (D:/PROG/brewskey/spark-protocol/dist/clients/Device.js:857:31)

HTTP 500 Error on Updating Firmware

If you first build a SparkProtocol

npm run update-firmware

sometimes (depending on your internet connection, or DoS detection) the process fails with timeout/500 errors.

Reason: the script starts every download at the same time (more than 220 of them). So if your internet connection is not fast enough (or GitHub throttles you as a suspected DoS attack), some of them fail.

Workaround:
Run the command repeatedly until no errors are left.

Note:
This does not work very well if you install spark-protocol as a dependency of spark-server.

Solution

I've created a patch to download the files step by step (with a maximum number of parallel downloads).

I will make a PR from https://github.com/keatec/spark-protocol
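The bounded-parallelism idea behind the patch can be sketched generically: run the download tasks through a small pool of workers instead of starting all ~220 at once. The helper below is illustrative, not the code in the linked fork:

```javascript
// Run an array of zero-argument async tasks with at most `limit` of them
// in flight at once; resolves to the results in the original order.
async function runWithLimit(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0; // index of the next task to start (claimed synchronously)
  async function worker() {
    while (next < tasks.length) {
      const index = next;
      next += 1;
      results[index] = await tasks[index]();
    }
  }
  const poolSize = Math.min(limit, tasks.length);
  await Promise.all(Array.from({ length: poolSize }, worker));
  return results;
}

// Hypothetical usage for the firmware script:
// await runWithLimit(urls.map((url) => () => download(url)), 5);
```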

Remove uuid and use the npm package

We should be using the npm package for this. Use the v4 implementation.

Also, when a UUID is generated we should probably figure out a way to check whether it's already in use. There should never be a collision, but we can't be 100% sure.

Can't commit because of missing files for lint

The third-party directory seems to be missing:

  7:22  error  Unable to resolve path to module '../../third-party/settings.json'   import/no-unresolved
  8:28  error  Unable to resolve path to module '../../third-party/specifications'  import/no-unresolved
  8:28  error  Missing file extension for "../../third-party/specifications"        import/extensions
  9:22  error  Unable to resolve path to module '../../third-party/versions.json'   import/no-unresolved
