schollz / find3
High-precision indoor positioning framework, version 3.
Home Page: https://www.internalpositioning.com/doc
License: MIT License
Method: POST
URL: http://10.91.16.76:8005/data
{
"d":"device1",
"f":"daimler",
"t":1520424248897,
"l":"LOCATION",
"s":{
"bluetooth":{
"20:25:64:b7:91:42":-72,
"20:25:64:b8:06:38":-81,
},
"wifi":{
"20:25:64:b7:91:40":-73,
"70:4d:7b:11:3a:c8":-81,
"88:d7:f6:a7:2a:4c":-39,
"8c:0f:6f:e7:2b:78":-42,
"8c:0f:6f:e7:2b:80":-43,
"92:0f:6f:e7:2b:80":-43,
"96:0f:6f:e7:2b:78":-39,
"9e:0f:6f:e7:2b:80":-43,
"ac:9e:17:7f:38:a4":-55,
"dc:fe:07:79:aa:c0":-90,
"dc:fe:07:79:aa:c3":-89
}
},
"gps":{
"lat":12.1,
"lon":10.1,
"alt":54
}
}
RESPONSE:
Status Code: 400 Bad Request
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, X-Max
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Access-Control-Max-Age: 86400
Content-Length: 120
Content-Type: application/x-gzip
Date: Tue, 17 Apr 2018 10:51:02 GMT
Vary: Accept-Encoding
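For reference, a request like this can be reproduced from a short script. The following is only a sketch using the Python standard library; the field meanings are inferred from the example above, and the server address is the one in the report:

```python
import json
import urllib.request

def build_payload():
    """Build a /data payload with the same shape as the request above."""
    return {
        "d": "device1",          # device name
        "f": "daimler",          # family name
        "t": 1520424248897,      # timestamp in milliseconds
        "l": "LOCATION",         # location label (used when learning)
        "s": {                   # sensor type -> MAC address -> RSSI
            "wifi": {
                "20:25:64:b7:91:40": -73,
                "88:d7:f6:a7:2a:4c": -39,
            },
            "bluetooth": {
                "20:25:64:b7:91:42": -72,
            },
        },
    }

def post_data(server, payload):
    """POST the payload as JSON to the server's /data endpoint."""
    req = urllib.request.Request(
        server + "/data",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# post_data("http://10.91.16.76:8005", build_payload())
```

A 400 response usually means the server could not parse the body, so serializing with a proper JSON library (rather than hand-built strings) rules out malformed-JSON causes.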
One of my friends built the private server as instructed on the website. He noticed the delete-family command deleted the data temporarily, but a few minutes later it was repopulated with the same data that was deleted. Is the data stored somewhere in a cache, or is it possible the delete-family command only deleted what the AI learned?
It would be awesome if there were a front-end page that was the equivalent of the API call:
http://server:8003/api/v1/by_location/FAMILY_NAME
This could list:
I'm unable to start learning - I get "sensor data cannot be empty"
victorhooi@victorhooi-macbookpro ~> http POST 172.27.181.166:8005/passive family=gcc device=wifi-40:4e:36:8b:94:79 location=entrance
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, X-Max
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Access-Control-Max-Age: 86400
Content-Length: 57
Content-Type: application/json; charset=utf-8
Date: Sat, 10 Mar 2018 16:24:49 GMT
{
"message": "sensor data cannot be empty",
"success": false
}
It would be awesome if we could store historical location data - e.g. how many devices, and which devices (by MAC address) were at each location, in say, 5 minute increments.
We could store this in a time-series DB like InfluxDB.
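As a sketch of what that could look like: each sighting becomes one point in InfluxDB's line protocol (the measurement, tag, and field names below are made up for illustration):

```python
def to_line_protocol(family, location, mac, timestamp_ns):
    """Encode one device sighting as an InfluxDB line-protocol point.

    Line protocol is 'measurement,tag=value ... field=value timestamp';
    the names used here (presence, present) are purely illustrative.
    """
    return (
        f"presence,family={family},location={location},mac={mac} "
        f"present=1i {timestamp_ns}"
    )
```

A batch of these lines per 5-minute window could then be POSTed to InfluxDB's write endpoint, and counting devices per location becomes a simple GROUP BY query.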
As requested per chat on Slack, I took some tshark captures, with client devices moving between APs:
Name | MAC Address |
---|---|
Macbook Pro | ac:bc:32:81:53:eb |
Pixel Phone | 40:4e:36:0b:82:71 |
Downloaded large test file on the Macbook like so:
wget -O /dev/null http://speedtest.tele2.net/10GB.zip
Downloaded large test file on phone via browser.
Name | MAC Address |
---|---|
Cafe | f0:9f:c2:7c:66:08 |
Kid's Room | f0:9f:c2:7c:76:56 |
Main Hall (Right) | f0:9f:c2:7c:68:9c |
Main Hall (Left) | f0:9f:c2:7c:66:1b |
Upstairs | f0:9f:c2:7c:6c:1e |
I took captures on 3 Raspberry Pis with the following command:
sudo tshark -I -i wlan1 -a duration:600 -w /tmp/tshark-<location>
I set the time limit to 10 minutes; in this case, I ended up capturing for longer, so I created a new file ending in "2" once the first capture ended.
Started off under Cafe AP
Fri 23 Mar 2018 06:32:39 AEDT
Moved closer to Kidsroom AP
The MacBook stayed associated with the Cafe AP; the Pixel Phone changed its association to the Kidsroom AP.
Fri 23 Mar 2018 06:35:38 AEDT
Moved back to Cafe AP
Fri 23 Mar 2018 06:38:18 AEDT
Moved back to Kidsroom AP
Fri 23 Mar 2018 06:41:08 AEDT
Moved back to Cafe AP
How long should we spend training each location?
30 seconds? 5 minutes? Etc.?
Can we expect accuracy to go up as we spend more time?
The logfiles can get large, especially if there are a lot of fingerprints and/or -debug has been turned on (I'm thinking of main.stdout).
It may be worth looking into automatically rotating this.
Maybe something like lumberjack? Seems like an easy swap.
Alternatively, if we want to push this onto the user, we should mention it explicitly in README.md.
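For the Go server, lumberjack would indeed be a small swap. Just to illustrate the behaviour being asked for, here is the same size-based rotation sketched with Python's stdlib RotatingFileHandler (the logger name and sizes are illustrative):

```python
import logging
import logging.handlers

def make_rotating_logger(path, max_bytes=10 * 1024 * 1024, backups=3):
    """Log to `path`, rolling over to path.1, path.2, ... at max_bytes.

    This mirrors what lumberjack provides for Go programs: the current
    file stays bounded and only `backups` old files are kept.
    """
    handler = logging.handlers.RotatingFileHandler(
        path, maxBytes=max_bytes, backupCount=backups)
    logger = logging.getLogger("find3-rotation-demo")
    logger.setLevel(logging.DEBUG)
    logger.addHandler(handler)
    return logger
```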
I'm seeing lots of lines stating database is locked in my find3 logfile:
2018-04-11 05:01:13 [WARN] [PID-12] analysis.go AnalyzeSensorData:117 [gcc] nb1 classify: problem preparing SQL: database is locked
2018-04-11 05:01:13 [WARN] [PID-12] analysis.go AnalyzeSensorData:117 [gcc] nb1 classify: problem preparing SQL: database is locked
2018-04-11 05:01:14 [WARN] [PID-12] analysis.go AnalyzeSensorData:117 [gcc] nb1 classify: problem preparing SQL: database is locked
2018-04-11 05:01:15 [WARN] [PID-12] analysis.go AnalyzeSensorData:117 [gcc] nb1 classify: problem preparing SQL: database is locked
2018-04-11 05:01:15 [WARN] [PID-12] server.go parseRollingData:883 [gcc] problem saving: Columns: database is locked
2018-04-11 05:01:15 [DEBUG] [PID-12] server.go parseRollingData:885 [gcc] saved reverse sensor data for wifi-48:bf:6b:18:66:76
2018-04-11 05:01:15 [DEBUG] [PID-12] server.go parseRollingData:874 [gcc] reverse sensor data: {Timestamp:1523422719868 Family:gcc Device:wifi-32:8f:a9:d2:30:ea Location: Sensors:map[wifi:map[entrance-wifi:-77]] GPS:{Latitude:0 Longitude:0 Altitude:0}}
2018-04-11 05:01:15 [WARN] [PID-12] server.go parseRollingData:883 [gcc] problem saving: Columns: database is locked
2018-04-11 05:01:15 [DEBUG] [PID-12] server.go parseRollingData:885 [gcc] saved reverse sensor data for wifi-2a:09:44:9c:fa:16
2018-04-11 05:01:15 [DEBUG] [PID-12] server.go parseRollingData:874 [gcc] reverse sensor data: {Timestamp:1523422721323 Family:gcc Device:wifi-ba:e3:39:b3:0c:0e Location: Sensors:map[wifi:map[kidsroom-wifi:-77]] GPS:{Latitude:0 Longitude:0 Altitude:0}}
2018-04-11 05:01:15 [WARN] [PID-12] analysis.go AnalyzeSensorData:117 [gcc] nb1 classify: problem preparing SQL: database is locked
2018-04-11 05:01:15 [WARN] [PID-12] server.go parseRollingData:883 [gcc] problem saving: AddSensor, execute: database is locked
2018-04-11 05:01:15 [DEBUG] [PID-12] server.go parseRollingData:885 [gcc] saved reverse sensor data for wifi-30:87:d9:b1:1d:e8
2018-04-11 05:01:15 [DEBUG] [PID-12] server.go parseRollingData:874 [gcc] reverse sensor data: {Timestamp:1523422720304 Family:gcc Device:wifi-f0:d7:aa:c5:9c:3b Location: Sensors:map[wifi:map[mainhallleft-wifi:-75 mainhallright-wifi:-79]] GPS:{Latitude:0 Longitude:0 Altitude:0}}
2018-04-11 05:01:16 [WARN] [PID-12] server.go parseRollingData:883 [gcc] problem saving: AddSensor, execute: database is locked
2018-04-11 05:01:16 [DEBUG] [PID-12] server.go parseRollingData:885 [gcc] saved reverse sensor data for wifi-5e:7e:32:e1:01:93
2018-04-11 05:01:16 [DEBUG] [PID-12] server.go parseRollingData:874 [gcc] reverse sensor data: {Timestamp:1523422717748 Family:gcc Device:wifi-d6:4c:24:15:bb:71 Location: Sensors:map[wifi:map[entrance-wifi:-73]] GPS:{Latitude:0 Longitude:0 Altitude:0}}
2018-04-11 05:01:16 [WARN] [PID-12] server.go parseRollingData:883 [gcc] problem saving: AddSensor, execute: database is locked
2018-04-11 05:01:16 [DEBUG] [PID-12] server.go parseRollingData:885 [gcc] saved reverse sensor data for wifi-16:ec:aa:9b:db:2a
2018-04-11 05:01:16 [DEBUG] [PID-12] server.go parseRollingData:874 [gcc] reverse sensor data: {Timestamp:1523422719847 Family:gcc Device:wifi-ba:12:68:ed:6b:54 Location: Sensors:map[wifi:map[entrance-wifi:-65 mainhallright-wifi:-69 kidsroom-wifi:-75]] GPS:{Latitude:0 Longitude:0 Altitude:0}}
2018-04-11 05:01:18 [WARN] [PID-12] analysis.go AnalyzeSensorData:117 [gcc] nb1 classify: problem preparing SQL: database is locked
2018-04-11 05:01:19 [WARN] [PID-12] server.go handlerReverse:812 Set: database is locked
2018-04-11 05:01:19 [INFO] [PID-12] server.go func1:971 172.27.207.39:56610 POST /passive 15.349143496s
And then in my passive scanners:
2018-04-11 05:03:24 [DEBUG] server-main.go postData:47 posting data
2018-04-11 05:03:24 [ERROR] main.go reverseCapture:122 Post http://172.27.181.166:8005/passive: dial tcp 172.27.181.166:8005: connect: connection refused
2018-04-11 05:03:24 [INFO] main.go main:79 reverse scanning with wlan1
2018-04-11 05:03:24 [DEBUG] reverse.go ReverseScan:23 saving tshark data to /tmp/tshark-JIDeBSEgjd
2018-04-11 05:03:24 [DEBUG] reverse.go ReverseScan:23 tshark -I -i wlan1 -a duration:40 -w /tmp/tshark-JIDeBSEgjd
2018-04-11 05:03:24 [DEBUG] utils.go RunCommand:14 tshark -I -i wlan1 -a duration:40 -w /tmp/tshark-JIDeBSEgjd
Is it possible I've hit some limit in SQLite3?
I have 5 passive scanners (running on Raspberry Pis).
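For what it's worth, database is locked is not a size limit: it is SQLite rejecting access while another connection holds the write lock, which becomes likely with 5 scanners posting concurrently. Two common mitigations are a busy timeout and WAL journal mode. A sketch with Python's stdlib (the Go server would set the equivalent options on its own SQLite driver):

```python
import sqlite3

def open_db(path):
    """Open an SQLite database configured to tolerate concurrent writers."""
    # Wait up to 5 seconds for a lock instead of failing immediately
    # with "database is locked".
    conn = sqlite3.connect(path, timeout=5.0)
    # WAL mode lets readers proceed while a single writer is active.
    conn.execute("PRAGMA journal_mode=WAL")
    return conn
```

WAL is persistent per database file, so it only needs to be set once; the busy timeout is per connection.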
For the public server: $ curl https://cloud.internalpositioning.com/api/v1/mqtt/FAMILY
Expected:
{"message":"Added 'FAMILY' for mqtt. Your passphrase is 'XX'","success":true}
Actual:
404 page not found
(This is not an issue for the docker containers.)
I noticed there is a way to delete an entire family, but is there a way to delete one particular room? While training, the room mate forgot to close the app out of training phase and walked all over the house for half a day before realizing what he did. The other readings are good I just want to delete one particular room. Ideally, this would be done by clicking something on the dashboard, but a button built into the app would be nice too.
Does the 10 second timeout happen only during the pull room request or does it also apply during the training phase? I'm ok with training taking longer if it results in more accurate results. I do not want the pull room request to take long though.
Also, does the AI progressively get smarter with each calibration? I noticed that one day all the rooms were in the blue, then three days later 3 rooms were in red and 4 were in green. I know when it calibrates it keeps 30% of the data separate to run as a test, but still these numbers seem unusual. What's even more unusual is that the real results appear to be much less accurate than the numbers produced by the dashboard.
The original Find app currently appears to be significantly more accurate during real-world tests. I am wondering if it takes the AI a few days to figure out what to prioritize?
The dashboard needs a little work. I have no idea how it is possible to have 0% accuracy on 56 readings.
Also, the number of data readings stored is always directly below the first character. This looks great until you give a room a long name, e.g. "Beer pong room". Then the number of data readings directly overlaps the accuracy readings.
If I should have made separate issue reports for each of these, let me know. I will be happy to separate them. I just didn't want the system to suspect I was spamming the GitHub account by sending multiple issues.
Is there any chance we can expect any neural-network code anytime soon? I am debating whether this is considered AI or just machine learning.
I have created a very experimental, early-stage client for the ESP32 / ESP8266 microcontroller, which is available for as little as $1. This should allow people to create very cheap beacons.
Update: see https://github.com/DatanoiseTV/esp-find3-client
Please note this is a very early stage and experimental thing. Any input welcome!
Being able to select a timeframe, or go back in time to see all devices at a certain point in time, would be a very useful feature for creating statistics after an event has ended, if the server is used for that.
We should add the following fields to each device in by_location (and similar location endpoints):
first_seen: timestamp, for when we first saw this device
active_mins: integer, how many minutes this device has been present (i.e. active) - if it's seen in two consecutive scans, we assume it was active for the entire period in between as well
Even though we can currently determine the last known location of a client by sending a request to GET /api/v1/location/FAMILY/DEVICE, it would be useful to have a simplified API endpoint, e.g. GET /api/v1/location_min/FAMILY/DEVICE, which just returns the name and probability of the most probable last location. This would save a lot of memory on embedded devices by avoiding unnecessary JSON parsing.
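The active_mins heuristic described above could be computed like this (a sketch; the function name and default scan interval are made up):

```python
def activity_stats(timestamps_ms, scan_interval_ms=60_000):
    """Return (first_seen_ms, active_mins) for one device's sightings.

    Consecutive sightings no more than scan_interval_ms apart count the
    whole gap as active time; larger gaps are treated as absence.
    """
    ts = sorted(timestamps_ms)
    if not ts:
        return None, 0
    active_ms = 0
    for prev, cur in zip(ts, ts[1:]):
        if cur - prev <= scan_interval_ms:
            active_ms += cur - prev
    return ts[0], active_ms // 60_000
```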
Can I run this on a raspberry pi?
I did try but I got this error:
pi@officepi:~/find3/find3 $ sudo docker build -t find3 .
Sending build context to Docker daemon 14.9MB
Step 1/32 : FROM ubuntu:18.04
---> 7295fb90f21f
Step 2/32 : RUN apt-get update
---> Running in 351b1a998416
The command '/bin/sh -c apt-get update' returned a non-zero code: 139
I'm wondering if this is related to the pi's armv6l architecture or possibly a memory limitation?
Any ideas? Thanks!
The README.md file here:
https://github.com/schollz/find3/blob/d7fa365637509bf31f34167bc08cfc203d687663/docs/README.md
refers to the "www.find3.com" domain. However, this appears to be a squatted domain:
I believe the URL should be https://find3.internalpositioning.com/
When registering a device (from the Android app), the family name is turned to lowercase, regardless of how it's actually entered in the app.
At the same time, the server component does distinguish unique family names with case sensitivity.
Example: the family name is LoWeRcAsE:
The Android app will lowercase it ("added x bluetooth and x wifi points for lowercase/devicename at location"), so the server would then work with https://cloud.internalpositioning.com/view/dashboard/lowercase instead of the desired https://cloud.internalpositioning.com/view/dashboard/LoWeRcAsE.
The documentation should include a list of all the API and front-end endpoints:
I believe these are all in server.go
, right?
I'm happy to take a stab at this, if you provide some pointers on how you'd like it done =).
The learn.sh file referenced from https://www.internalpositioning.com/doc/server_setup.md attempts to connect to the Docker container using port 8003 instead of 8005 (as specified earlier in the same document's docker run statement).
Attached is a patch file that has been verified.
Has anyone tried to fiddle with capturing BT Low Energy packets with this code? Where would I look to start hacking this in? I have a Raspberry Pi Zero W and it sniffs the packets great with hcitool, so seems like it could be a nice addition for near real-time proximity detection. No latency/overhead associated with the regular BT or AP polling.
On the passive scanning page it says that learning can be sped up by using the Android scanner app. What mode do you use it in - scanning or learning?
I'm assuming I should make the name of the device on the Android scanner the same as the device name that the scanning computers are learning about - e.g., wifi-XX:XX:XX:XX:XX:XX, where XX:XX:XX:XX:XX:XX is the wifi MAC address of the phone.
I used learning mode on the Android scanner while the scanning computers (I have 2 RPi3's) were in learning mode. It's a little odd - the Android scanner crashes (or, more accurately, the GUI crashes), but as I monitor my mosquitto server, I can still see all of the learning data get published to MQTT. But when I stop learning on the scanning computers and start passive scanning, the results are wrong. I learned 2 rooms as described above, then went to the 1st learned room, did a force stop on the Android scanner app, and put the scanning computer back into scan mode; when monitoring the mosquitto traffic, it reports that the phone is in the 2nd learned room. If I shut down passive scanning (sudo kill -9 on the RPi for the find3-cli-scanner) and turn on the Android scanner on the phone, the mosquitto data shows the correct room. (Similarly, if I leave the passive scanner running on the RPis and turn on the Android scanner on the phone, the Android scanner GUI will crash, but mosquitto still shows scanning data from the phone.)
Does it sound like I'm doing this correctly? If so, any ideas on what to check to see what might be going wrong with passive scanning?
thx, jay
Is it possible to use my own MQTT server for FIND3 server?
I already have a Mosquitto server running for my home automation system.
With FIND I could use the Zanzito app for tracking, which is already present on our smartphones.
If the server dies (or gets restarted), the find3-cli-scanner instances are fine: they will keep sending output to /dev/null, but should pick up once the server comes back on.
May be worth mentioning this in the docs?
Hello, and thank you so much for this awesome project!
I have been reading the docs, and I have some doubts about starting to develop an app on top of find3.
It would be awesome to have a getting-started tutorial, or a real app as an example, to understand how to use find3 nicely and smoothly.
Do you have any of these types of resources? Do you think this is something that should be added in the future?
Thank you again!
Wondering how the find3 algorithms work with wifi mesh systems such as Orbi (not really true mesh), Velop, etc. Does each node in the mesh constitute a separate physical wifi access point from a find3 perspective? In other words, are the wifi fingerprints a result of the distance and signal calculation to each individual wifi mesh node (as if they were independent wifi access points), or do the distance/signal calculations treat the entire mesh (group of wifi APs) as a single entity?
I'm curious because I am encountering some resolution issues that I don't seem to see with Find version 2, and am wondering if this is related. For example, overall it seems to take longer for find3 to converge on location changes - in particular, when I change locations between 2 rooms that each have a wifi mesh node, it's almost as if it doesn't detect the change until I make the find3 scanner the frontmost app on my Android phone. If I leave it in the background, it doesn't seem to detect the location change - or it takes a really long time (minutes). Find 2, in comparison, picked up changes almost immediately - but it also had erroneous changes. Once find3 has settled on the correct location, it doesn't seem to fluctuate.
I haven't tested this exhaustively, but the wifi mesh made me wonder. Maybe it's just a learning issue still? (Although my accuracy overall is 98%, and no individual location is less than 92%.)
thx, jay
The by_location field has a num_scanners field which tells us how many Find3 scanners saw this device.
It would be useful to allow by_location (as well as any other location API endpoints) to filter on this field, e.g. only show devices seen by 2 or more scanners.
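Until the server supports it, the filter is easy to apply client-side; a sketch, assuming each device entry in the by_location response carries its num_scanners field:

```python
def seen_by_at_least(devices, min_scanners=2):
    """Keep only devices reported by at least min_scanners scanners."""
    return [d for d in devices if d.get("num_scanners", 0) >= min_scanners]
```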
We know the MAC address for each device.
The first three octets tell us the vendor (e.g. https://www.macvendorlookup.com/).
We can expose this in by_location as a field per device.
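The lookup only needs the OUI (the first three octets); a sketch with a tiny, illustrative prefix table (a real implementation would load the IEEE OUI registry or use a lookup service like the one above):

```python
# Tiny illustrative OUI table; a real implementation would load the
# IEEE registry (tens of thousands of prefixes).
OUI_VENDORS = {
    "f0:9f:c2": "Ubiquiti Networks",
    "ac:bc:32": "Apple",
}

def vendor_for(mac):
    """Look up the vendor from the first three octets of a MAC address."""
    return OUI_VENDORS.get(mac.lower()[:8], "unknown")
```

Note that randomised (locally administered) MACs will not resolve to a real vendor, which dovetails with the randomised-MAC filtering issue elsewhere in this list.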
I have a lot of data in another SQLite database (not compatible with Find), and it takes a while to put it into the Find database via requests, because the server learns on each request.
I need only api.SaveSensorData(p), and I will trigger learning once the database is complete.
So maybe add a flag for it?
Thanks, Mike
With the previous Find you could take advantage of a monitoring wifi card and set up a pretty neat passive tracker for devices.
Does this work the same way in Find3? How would I go about setting that up? (I have 4x RasPiZeroW handling Find2 atm.)
I have a find3 server running on a Google Compute Engine instance: n1-standard-1 (1 vCPU, 3.75 GB memory).
I have five Raspberry PIs that are scanning, and sending fingerprints to this server. I have done training, and added around 8 locations.
I'm using docker stats find3server to track memory usage of find3server.
The memory usage starts out at around 193 MB. However, there seem to be two issues.
Firstly, over time the Docker instance appears to consume memory (and not release it), and eventually consumes all of it and crashes the host machine.
Secondly, if you call the http://server:8003/api/v1/by_location/family_name endpoint, this seems to accelerate the process - it seems to add a few hundred MB each time.
For example - initial memory:
Then, calling http GET SERVER_IP_ADDRESS:8003/api/v1/by_location/gcc (new screenshot of docker stats find3server after each call):
However, even just leaving the machine alone, the memory usage climbs on its own.
Also, each request against api/v1/by_location/<family> takes longer and longer.
I had to change the httpie timeout from the default of 30 seconds, to 5 minutes, then 10 minutes, to let it complete. After around 5 calls, it would no longer complete even after 10 minutes.
Also - even after restarting the entire machine and starting find3server up again, api/v1/by_location still won't complete within 10 minutes =(.
I'm still studying the documentation, but one thing that was not clear to me is whether there is a possibility to scan all the devices near the scanners, and not just the "registered" ones.
For the ESP8266, there seems to be no reliable way to get Unix time in microseconds.
At least, I had no success with NTP.
@schollz Could you provide an API-Endpoint like /time which returns Unix server time in microseconds? Currently I am running my own microservice for it: https://unixtimeservice.herokuapp.com/
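The handler itself would be trivial; a sketch of the proposed /time logic (the Flask route shown in the comment is hypothetical wiring, not an existing endpoint):

```python
import time

def unix_micros():
    """Unix time in microseconds, as the proposed /time endpoint would return."""
    return time.time_ns() // 1_000

# Hypothetical wiring into a Flask app:
# @app.route("/time")
# def server_time():
#     return str(unix_micros())
```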
What motivated you to change the database from BoltDB to SQLite? I am considering doing the same in a project of mine, and I would like to know your reasons for it.
Thanks in advance.
It would be nice if there were a frontend page on the server (and/or an API) that listed all the connected find3-cli-scanner instances.
This could list:
I assume these would need to be persisted somewhere, right?
It's useful to quickly check if things connected up properly, and also keep an eye on things in the field.
The docs should mention what sort of Wifi packets are being used for passive scanning.
For example - is it only probe requests? (These are every 10 minutes, right?)
Or how about general wifi traffic?
We should also make some notes about how often we'd expect to see devices check-in etc.
"Post http://localhost:8002/classify: net/http: request canceled (Client.Timeout exceeded while awaiting headers)" sometimes occur when http://127.0.0.1:8005/api/v1/by_location/magnetometer_and_wifi04
Hello, sir.
Could you add an API for deleting a location?
We should allow setting a timezone for each family (aka group).
This means we could print out last-seen timestamps in local time, including TZ code if we wanted.
(See schollz/find#173)
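With a timezone stored per family, rendering last-seen timestamps locally is straightforward; a sketch using Python's stdlib zoneinfo (the family-to-timezone mapping here is made up):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical per-family timezone setting.
FAMILY_TZ = {"gcc": "Australia/Sydney"}

def last_seen_local(family, timestamp_ms):
    """Render a UTC millisecond timestamp in the family's local time,
    including the TZ code."""
    tz = ZoneInfo(FAMILY_TZ.get(family, "UTC"))
    utc = datetime.fromtimestamp(timestamp_ms / 1000, tz=timezone.utc)
    return utc.astimezone(tz).strftime("%Y-%m-%d %H:%M:%S %Z")
```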
We should have the ability to filter out randomised MAC addresses (e.g. from iOS 8+, Android 6.0+, Windows 10, etc.).
I believe on iOS, they set the U/L bit to 1, so that should be reasonably easy to filter out?
https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local
Universally administered and locally administered addresses are distinguished by setting the second-least-significant bit of the first octet of the address. This bit is also referred to as the U/L bit, short for Universal/Local, which identifies how the address is administered. If the bit is 0, the address is universally administered. If it is 1, the address is locally administered. In the example address 06-00-00-00-00-00 the first octet is 06 (hex), the binary form of which is 00000110, where the second-least-significant bit is 1. Therefore, it is a locally administered address.
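The check the quote describes is a single bit test; a minimal sketch:

```python
def is_locally_administered(mac):
    """True if the U/L bit (bit 1 of the first octet) is set, meaning the
    address is locally administered and therefore likely randomised."""
    return bool(int(mac.split(":")[0], 16) & 0b10)
```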
At the API, when we return a location, it could have a boolean, "randomised". E.g.:
...
"toilets": [
{
"device": "wifi-ae:7e:d9:e0:6e:40",
"probability": 0.19580419580419584,
"timestamp": "2018-03-07T05:56:28.669Z",
"randomised": true
},
{
"device": "wifi-3e:ce:41:a2:ea:58",
"probability": 0.23326229508196722,
"timestamp": "2018-03-07T05:56:28.598Z",
"randomised": false
},
{
"device": "wifi-ec:b1:d7:5f:03:0a",
"probability": 0.23577049180327866,
"timestamp": "2018-03-07T05:56:28.177Z",
"randomised": true
}
]
},
"message": "got locations",
"success": true
}
And for the React frontend - we can have a slider button to show/hide randomised devices.
The Find3 server Docker image is quite large (around 1 GB).
I believe there is room to shrink this.
Advantages:
Alpine Linux is a popular minimal base image for Docker, and might be worth looking into.
(One of the reasons it's on Ubuntu 18.04 is for the latest Bluez - so we'll need to make sure this is still working. However, I believe this is only for the scanning machines.)
I noticed some of the packages may or may not be needed?
Below is from the current Dockerfile
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y git wget curl vim g++ sqlite3 mosquitto-clients mosquitto python3 python3-dev python3-pip python3-scipy python3-flask python3-sklearn python3-numpy golang supervisor
Do we really need both curl and wget?
vim may not be required inside the Docker image.
Current ai/requirements.txt:
The by_location endpoint has a num_scanners field which tells us how many Find3 scanners saw this device.
The React frontend page (e.g. http://localhost:8005/view/location/family_name/wifi-02:9f:c2:7d:66:08) should also expose this field as well.
With regard to https://cloud.internalpositioning.com, this would mean that family names should be kept secret.
Many embedded wireless platforms have enough resources to run a few tasks in the background. By building a minimal, single binary stand-alone version for MIPS and ARM, wireless routers could be used to gain passive and active data.