
doods's Introduction

DEPRECATED!!!

DOODS is now deprecated in favor of DOODS2... Now with more Python...

https://github.com/snowzach/doods2

DOODS

Dedicated Open Object Detection Service - Yes, it's a backronym...

DOODS is a gRPC/REST service that detects objects in images. It's designed to be very easy to use, to run as a container, and to be available remotely.

API

The API uses gRPC to communicate, but it has a REST gateway built in for ease of use. It supports both a single-call RPC and a streaming interface. Very basic pre-shared key authentication is available if you wish to protect the service, and TLS encryption is supported but disabled by default. DOODS uses the Content-Type header to automatically determine whether you are connecting in REST or gRPC mode. It listens on port 8080 by default.

GRPC Endpoints

The protobuf API definitions are in the odrpc/odrpc.proto file. There are three endpoints:

  • GetDetector - Get the list of configured detectors.
  • Detect - Detect objects in an image. Data should be passed as raw bytes when using gRPC.
  • DetectStream - Detect objects in a stream of images

REST/JSON

The services are available via the REST API at these endpoints (a minimal Go sketch of calling the GET endpoints follows the list):

  • GET /version - Get the version
  • GET /detectors - Get the list of configured detectors
  • POST /detect - Detect objects in an image
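
For example, here is a minimal Go sketch that calls the two GET endpoints and prints the raw JSON responses. It assumes DOODS is running locally on the default port 8080 with no auth key; it is an illustration, not an official client.

package main

import (
	"fmt"
	"io"
	"net/http"
)

// get fetches a DOODS REST endpoint and returns the response body as a string.
func get(path string) (string, error) {
	resp, err := http.Get("http://localhost:8080" + path)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	for _, path := range []string{"/version", "/detectors"} {
		body, err := get(path)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		fmt.Printf("GET %s -> %s\n", path, body)
	}
}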

POST /detect expects JSON in the following format:

{
	"detector_name": "default",
	"data": "<base64 encoded image information>",
	"file": "<image filename (instead of data)>",
	"detect": {
		"*": 50
	}
}

The result is returned as:

{
    "id": "test",
    "detections": [
        {
            "top": 0,
            "left": 0.05,
            "bottom": .8552,
            "right": 0.9441,
            "label": "person",
            "confidence": 87.890625
        }
    ]
}
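
For reference, here is a minimal Go sketch of structs that mirror this response shape and decode the sample above. The field names are taken from the JSON shown; the struct names are hypothetical and this is not an official client library.

package main

import (
	"encoding/json"
	"fmt"
)

// Detection mirrors one entry of the "detections" array shown above.
type Detection struct {
	Top        float64 `json:"top"`
	Left       float64 `json:"left"`
	Bottom     float64 `json:"bottom"`
	Right      float64 `json:"right"`
	Label      string  `json:"label"`
	Confidence float64 `json:"confidence"`
}

// DetectResponse mirrors the top-level response object.
type DetectResponse struct {
	ID         string      `json:"id"`
	Detections []Detection `json:"detections"`
}

func main() {
	sample := `{"id":"test","detections":[{"top":0,"left":0.05,"bottom":0.8552,"right":0.9441,"label":"person","confidence":87.890625}]}`
	var resp DetectResponse
	if err := json.Unmarshal([]byte(sample), &resp); err != nil {
		panic(err)
	}
	for _, d := range resp.Detections {
		fmt.Printf("%s (%.1f%%) box: %.3f %.3f %.3f %.3f\n", d.Label, d.Confidence, d.Top, d.Left, d.Bottom, d.Right)
	}
}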

You can also specify regions for specific detections. In that case, POST /detect expects JSON in the following format. If covers is true, a detection must completely cover the region you specify in order to trigger. If covers is false, any detection that overlaps any part of the region will trigger.

{
	"detector_name": "default",
	"data": "<base64 encoded image information>",
	"file": "<image filename (instead of data)>",
	"regions": [
		{
			"top": 0,
			"left": 0,
			"bottom": 1,
			"right": 1,
			"detect": {
				"person": 50,
				"*": 90
			},
			"covers": true
		}
	]
}

This will perform a detection using the detector called default. (If detector_name is omitted, the detector named default will be used if it exists.) When using the REST interface, data is base64-encoded image data; DOODS can decode PNG, BMP and JPG. You can also pass file in place of data to read a file from the machine DOODS is running on; file overrides data. The detect object specifies the objects to detect, as defined in the labels file, each with a minimum percentage match. You can also use "*", which matches anything above the given minimum percentage.

Example 1-Liner to call the API using curl with image data:

echo "{\"detector_name\":\"default\", \"detect\":{\"*\":60}, \"data\":\"`cat grace_hopper.png|base64 -w0`\"}" > /tmp/postdata.json && curl -d@/tmp/postdata.json -H "Content-Type: application/json" -X POST http://localhost:8080/detect

Another example 1-Liner specifying a region:

echo "{\"detector_name\":\"default\", \"regions\":[{\"top\":0,\"left\":0,\"bottom\":1,\"right\":1,\"detect\":{\"person\":40}}], \"data\":\"`cat grace_hopper.png|base64 -w0`\"}" > /tmp/postdata.json && curl -d@/tmp/postdata.json -H "Content-Type: application/json" -X POST http://localhost:8087/detect

Detectors

Optimally, you should pass image data in the size the detector requests. If you don't, it will be resized automatically. DOODS can read BMP, PNG and JPG as well as PPM. For detectors that do not specify a size (such as Inception) you do not need to resize.

TFLite

If you pass PPM image data in the right dimensions, it can be fed directly into Tensorflow Lite, which skips a couple of steps for speed. You can also specify hwAccel: true in the config to enable Coral EdgeTPU hardware acceleration. You must also provide an appropriate EdgeTPU model file; none are included with the base image.

Compiling

This is designed as a Go module-aware program and thus requires Go 1.12 or later. It also relies heavily on CGO. The easiest way to compile it is to use the Dockerfile, which will build a functioning Docker image. The image is a little large, but it includes two models.

Configuration

The configuration can be specified in a number of ways. By default you can create a JSON file and pass it with the -c option. You can also set environment variables that align with the config file values.

Example:

{
	"logger": {
		"level": "debug"
	}
}

This can be set via an environment variable:

LOGGER_LEVEL=debug

Options:

| Setting | Description | Default |
| --- | --- | --- |
| logger.level | The default logging level | "info" |
| logger.encoding | Logging format (console or json) | "console" |
| logger.color | Enable color in console mode | true |
| logger.disable_caller | Hide the caller source file and line number | false |
| logger.disable_stacktrace | Hide a stacktrace on debug logs | true |
| server.host | The host address to listen on (blank=all addresses) | "" |
| server.port | The port number to listen on | 8080 |
| server.tls | Enable https/tls | false |
| server.devcert | Generate a development cert | false |
| server.certfile | The HTTPS/TLS server certificate | "server.crt" |
| server.keyfile | The HTTPS/TLS server key file | "server.key" |
| server.log_requests | Log API requests | true |
| server.profiler_enabled | Enable the profiler | false |
| server.profiler_path | Where should the profiler be available | "/debug" |
| pidfile | Write a pidfile (only if specified) | "" |
| profiler.enabled | Enable the debug pprof interface | "false" |
| profiler.host | The profiler host address to listen on | "" |
| profiler.port | The profiler port to listen on | "6060" |
| doods.auth_key | A pre-shared auth key. Disabled if blank | "" |
| doods.detectors | The detector configurations | |

TLS/HTTPS

You can enable HTTPS by setting the config option server.tls = true and pointing server.certfile and server.keyfile at your certificate and key. To create a self-signed cert:

openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -keyout server.key -out server.crt

You will need to mount these files in the container and adjust the config to find them.
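
If you enable TLS with a self-signed certificate like the one above, clients must be told to trust it. Here is a minimal Go client sketch, assuming the generated server.crt is available to the client and DOODS is reachable over HTTPS on port 8080; the hostname you connect to must match the certificate's subject.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Load the self-signed certificate and add it to a custom trust pool.
	pem, err := os.ReadFile("server.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		panic("could not parse server.crt")
	}

	// Use an HTTP client that trusts only that certificate.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}

	resp, err := client.Get("https://localhost:8080/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}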

Detector Config

Detector config must be done with a configuration file. The default config includes one Tensorflow Lite mobilenet detector and the Tensorflow Inception model. This is the default config, except that the numThreads and numConcurrent settings are tuned a bit for the architecture they are running on.

doods:
  detectors:
    - name: default
      type: tflite
      modelFile: models/coco_ssd_mobilenet_v1_1.0_quant.tflite
      labelFile: models/coco_labels0.txt
      numThreads: 4
      numConcurrent: 4
      hwAccel: false
      timeout: 2m
    - name: tensorflow
      type: tensorflow
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels1.txt
      numThreads: 4
      numConcurrent: 4
      hwAccel: false
      timeout: 2m

The default models are downloaded from Google: coco_ssd_mobilenet_v1_1.0_quant_2018_06_29 and faster_rcnn_inception_v2_coco_2018_01_28.pb

default/tflite model labels

tensorflow model labels

The detector options are:

  • numThreads - the number of threads available for compatible operations in a model.
  • numConcurrent - the number of models that can run at the same time. This should be 1 unless you have a beefy machine.
  • hwAccel - specifies that a hardware device should be used. Currently the only supported device is the EdgeTPU.
  • timeout - if set, a detector (namely an EdgeTPU) that hangs for longer than the timeout will cause DOODS to error and exit. Generally this error is not recoverable and DOODS needs to be restarted.

Detector Types Supported

  • tflite - Tensorflow lite models - Supports Coral EdgeTPU if hwAccel: true and appropriate model is used
  • tensorflow - Tensorflow

EdgeTPU models can be downloaded from here: https://coral.ai/models/ (Use the Object Detection Models)

Examples - Clients

See the examples directory for sample clients

Docker

To run the container in Docker you need to map port 8080. If you want to update the models, you need to mount the model files and a config that uses them. For example:

docker run -it -p 8080:8080 snowzach/doods:latest

There is a script called fetch_models.sh that you can download and run; it creates a models directory, downloads several models, and writes an example.yaml config file. You could then run:

docker run -it -v ./models:/opt/doods/models -v ./example.yaml:/opt/doods/config.yaml -p 8080:8080 snowzach/doods:latest

Coral EdgeTPU

If you want to run it in Docker using the Coral EdgeTPU, you need to pass the device to the container with --device /dev/bus/usb. Example:

docker run -it --device /dev/bus/usb -p 8080:8080 snowzach/doods:latest

Misc

Special thanks to https://github.com/mattn/go-tflite as I would have never been able to figure out all the CGO stuff. I really wanted to write this in Go but I'm not good enough at C++/CGO to do it. Most of the tflite code is taken from that repo and customized for this tool.

And special thanks to @lhelontra, @marianopeck and @PINTO0309 for help in building tensorflow and binaries for bazel on the arm.

Docker Images

There are several published Docker images that you can use

  • latest - A multi-arch image that points to the arm32, arm64 and noavx images
  • noavx - 64-bit x86 image that should be highly compatible with any CPU
  • arm64 - ARM 64-bit image
  • arm32 - ARM 32-bit/arm7 image optimized for the Raspberry Pi
  • amd64 - 64-bit x86 image with all the fancy CPU features like AVX and SSE4.2
  • cuda - Support for NVidia GPUs

CUDA Support

There is now NVidia GPU support in a Docker image tagged cuda. To run it:

docker run -it --gpus all -p 8080:8080 snowzach/doods:cuda

For whatever reason, it can take a good 60-80 seconds before the model finishes loading.

Compiling

You can compile it yourself using the plain Dockerfile, which should pick the optimal CPU flags for your architecture. Make the snowzach/doods:local image with these commands:

$ make libedgetpu
$ make docker

You only need to make libedgetpu once; it will download and compile it for all architectures. I hope to streamline that process into the main Dockerfile at some point.

Contributors

deadended, john-arvid, snowzach

doods's Issues

404 errors....

NOTE: This is in reference to a rebuild compile for a Synology NAS....

I managed to get the rebuild branch to compile all the way through. Now, when I try to use the tensorflow models (which I got from the script) using the config (from the script), I get this error...

INFO server/server.go:138 HTTP Request {"status": 404, "took": 0.018610838, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "DS418Play/InkKHkGhVK-000008", "remote": "192.168.0.3:46276"}

The config file is:

doods:
  detectors:
    - name: tensorflow
      type: tensorflow
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels1.txt
      width: 224
      height: 224
      numThreads: 0
      numConcurrent: 4

Still works like a champ with the standard tflite detector... I'm SO close! Still get warnings about SSE4.1 and SSE4.2 as well, but that seems to be common as I see other people commenting on it.

Update - Did a new build overnight, just in case the interrupted build was the problem. Used the 'no-cache' flag to force it to build clean. No difference. The tflite detector functions fine but the tensorflow detector throws 404's everywhere. It's a step in the right direction though. At least it doesn't "blow up" like it did before.

Figured out you have to leave all of the lines in the config.yaml file. You can't comment out the stuff you're not using. This one is on me.

Support for Older nVidia GPU?

Currently I have a laptop with these specs:

  • Lenovo G400s
  • 8GB Ram
  • i3 3110-M
  • nVidia GT 720M

Software specs:

  • Host Debian buster
  • Installed nvidia container
  • nVidia driver using 390.xx version
  • Installed Cuda version: 9.1.85

When I run the docker image, I got this error

docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: requirement error: unsatisfied condition: cuda>=10.2, please update your driver to a newer version, or use an earlier cuda container\\\\n\\\"\"": unknown.
ERRO[0003] error waiting for container: context canceled

It states that I need to install a newer driver and CUDA version, which I can't do because the GPU is no longer supported by newer driver versions.

Given that, I just want to ask: do you have any plans to support older nVidia GPUs?

Question: Can you make it detect specific people?

I was wondering if you could have it compare the image to a small set of "known" people for recognition. For instance, if I have an image of me and each of my family members, and it sees a person detection with a 90% match, it could test those images against the known images to see if any match, then report back. That would make it so that we could trigger automations based upon specific recognitions.

Support getting image from a file

Loving doods so far.

My goal is to have ~10 cameras each doing an image detection every ~1 second (some cameras looking for wildlife, others cars, others people). I have plenty of CPU cycles sitting unused, so I spun up 10 doods containers on my K8s cluster. I used the Home Assistant integration and set scan_interval: 2 (two seconds). Everything works well enough, but I'm wondering if I can further optimize by making the detectors access files directly — right now, I appear to be using ~500 Kbps in constant bandwidth.

I'm already running MotionEye upstream of doods, so it's easy for me to create a still .jpg image every 1 second for doods to consume. And I know doods can simply cat a local file, which could be accessible to it through a shared mounted volume with MotionEye. My question has more to do with the Home Assistant side. Is there an easy way to switch over the detector integration to use this approach? Or am I looking at what amounts to a rewrite of the custom component?

coral edge TPU issues on RPI4

Howdy. I'm not sure this is explicitly a thing with doods, but thought you might have some insight into what might be going on with doods and Coral. I'm running Coral attached to a RPI4B that has USB boot enabled and uses an external USB disk as its system drive (no SD card at all). The OS is debian/pios lite running arm64.

I'm trying to transition from CPU-based detection to the Coral tf lite models. To do so I used the script to download all the models then, with the following config:

doods:
  detectors:
    - name: default
      type: tflite
      modelFile: models/coco_ssd_mobilenet_v1_1.0_quant.tflite
      labelFile: models/coco_labels0.txt
      numThreads: 4
      numConcurrent: 4
      hwAccel: false
      timeout: 2m
    - name: edgetpu
      type: tflite
      modelFile: models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
      labelFile: models/coco_labels0.txt
      numThreads: 0
      numConcurrent: 4
      hwAccel: true
    - name: tensorflow
      type: tensorflow
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels1.txt
      numThreads: 4
      numConcurrent: 4
      hwAccel: false
      timeout: 2m

... am running: docker run -it --device /dev/bus/usb -e logger.level='debug' -v /home/pi/models/:/opt/doods/models -v /home/pi/asdf.config:/opt/doods/config.yaml -p 8080:8080 snowzach/doods:latest. The first time through it errors out, stating it can't initialize the edgetpu.

2020-11-23T06:40:45.233Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "default", "type": "tflite", "model": "models/coco_ssd_mobilenet_v1_1.0_quant.tflite", "labels": 80, "width": 300, "height": 300}
2020-11-23T06:41:16.772Z	ERROR	detector/detector.go:74	Could not initialize detector edgetpu: could not initialize edgetpu /sys/bus/usb/devices/2-2	{"package": "detector"}
2020-11-23T06:41:17.357Z	ERROR	detector/detector.go:74	Could not initialize detector tensorflow: Could not import model: Node 'SecondStageFeatureExtractor/InceptionV2/Mixed_5c/Branch_0/Conv2d_0a_1x1/BatchNorm/FusedBatchNorm': Unknown input node 'SecondStageFeatureExtractor/InceptionV2/Mixed_5c/Branch_0/Conv2d_0a_1x1/Conv2D_bn_offset'	{"package": "detector"}
2020-11-23T06:41:17.359Z	INFO	server/server.go:284	API Listening	{"package": "server", "address": ":8080", "tls": false, "version": "v0.2.5-0-gbf6d7a1-dirty"}
^C2020-11-23T06:42:28.809Z	INFO	conf/signal.go:45	Received Interrupt...

Prior to this run the lsusb output returns:

pi@rpi-4b-3:~ $ lsusb
Bus 002 Device 003: ID 1a6e:089a Global Unichip Corp. 
Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. Name: ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

...after the run, though, the Bus 002 Device 003 changes to:

pi@rpi-4b-3:~ $ lsusb
Bus 002 Device 004: ID 18d1:9302 Google Inc. 
Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. Name: ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

A second pass at running the container yields the following:

pi@rpi-4b-3:~ $ docker run -it --device /dev/bus/usb -e logger.level='debug' -v /home/pi/models/:/opt/doods/models -v /home/pi/asdf.config:/opt/doods/config.yaml -p 8080:8080 snowzach/doods:latest
2020-11-23T07:11:18.487Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "default", "type": "tflite", "model": "models/coco_ssd_mobilenet_v1_1.0_quant.tflite", "labels": 80, "width": 300, "height": 300}
2020-11-23T07:11:21.200Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "edgetpu", "type": "tflite-edgetpu", "model": "models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite", "labels": 80, "width": 300, "height": 300}
2020-11-23T07:11:21.673Z	ERROR	detector/detector.go:74	Could not initialize detector tensorflow: Could not import model: Node 'SecondStageFeatureExtractor/InceptionV2/Mixed_5c/Branch_0/Conv2d_0a_1x1/BatchNorm/FusedBatchNorm': Unknown input node 'SecondStageFeatureExtractor/InceptionV2/Mixed_5c/Branch_0/Conv2d_0a_1x1/Conv2D_bn_offset'	{"package": "detector"}
2020-11-23T07:11:21.674Z	INFO	server/server.go:284	API Listening	{"package": "server", "address": ":8080", "tls": false, "version": "v0.2.5-0-gbf6d7a1-dirty"}

and will remain up and online for about another 8-10 seconds before the host OS crashes. It crashes because the USB host controller seems to die and hang. Specifically:

[28882.276448] xhci_hcd 0000:01:00.0: xHCI host not responding to stop endpoint command.
[28882.292499] xhci_hcd 0000:01:00.0: Host halt failed, -110
[28882.292508] xhci_hcd 0000:01:00.0: xHCI host controller not responding, assume dead

and:

[28882.293319] xhci_hcd 0000:01:00.0: HC died; cleaning up
[28882.293860] usb 1-1: USB disconnect, device number 2
[28882.296644] usb 2-1: USB disconnect, device number 2

Full log:

[28866.415880] docker0: port 1(vethec1d83d) entered disabled state
[28866.416026] vethda2c989: renamed from eth0
[28866.482708] docker0: port 1(vethec1d83d) entered disabled state
[28866.495637] device vethec1d83d left promiscuous mode
[28866.495648] docker0: port 1(vethec1d83d) entered disabled state
[28867.639223] docker0: port 1(vethd25a02b) entered blocking state
[28867.639232] docker0: port 1(vethd25a02b) entered disabled state
[28867.639371] device vethd25a02b entered promiscuous mode
[28867.639552] docker0: port 1(vethd25a02b) entered blocking state
[28867.639558] docker0: port 1(vethd25a02b) entered forwarding state
[28867.639850] docker0: port 1(vethd25a02b) entered disabled state
[28867.707442] IPv6: ADDRCONF(NETDEV_CHANGE): vethd25a02b: link becomes ready
[28867.707518] docker0: port 1(vethd25a02b) entered blocking state
[28867.707524] docker0: port 1(vethd25a02b) entered forwarding state
[28868.148606] docker0: port 1(vethd25a02b) entered disabled state
[28868.148983] eth0: renamed from veth65474e7
[28868.165092] docker0: port 1(vethd25a02b) entered blocking state
[28868.165102] docker0: port 1(vethd25a02b) entered forwarding state
[28869.984866] usb 2-2: reset SuperSpeed Gen 1 USB device number 4 using xhci_hcd
[28870.004932] usb 2-2: LPM exit latency is zeroed, disabling LPM.
[28882.276448] xhci_hcd 0000:01:00.0: xHCI host not responding to stop endpoint command.
[28882.292499] xhci_hcd 0000:01:00.0: Host halt failed, -110
[28882.292508] xhci_hcd 0000:01:00.0: xHCI host controller not responding, assume dead
[28882.292804] usb 2-1: cmd cmplt err -108
[28882.292817] usb 2-1: cmd cmplt err -108
[28882.292829] usb 2-1: cmd cmplt err -108
[28882.292840] usb 2-1: cmd cmplt err -108
[28882.292851] usb 2-1: cmd cmplt err -108
[28882.292863] usb 2-1: cmd cmplt err -108
[28882.292873] usb 2-1: cmd cmplt err -108
[28882.292884] usb 2-1: cmd cmplt err -108
[28882.292894] usb 2-1: cmd cmplt err -108
[28882.292905] usb 2-1: cmd cmplt err -108
[28882.292924] usb 2-1: cmd cmplt err -108
[28882.292935] usb 2-1: cmd cmplt err -108
[28882.292945] usb 2-1: cmd cmplt err -108
[28882.292955] usb 2-1: cmd cmplt err -108
[28882.292965] usb 2-1: cmd cmplt err -108
[28882.292975] usb 2-1: cmd cmplt err -108
[28882.292988] usb 2-1: cmd cmplt err -108
[28882.292998] usb 2-1: cmd cmplt err -108
[28882.293008] usb 2-1: cmd cmplt err -108
[28882.293019] usb 2-1: cmd cmplt err -108
[28882.293030] usb 2-1: cmd cmplt err -108
[28882.293042] usb 2-1: cmd cmplt err -108
[28882.293053] usb 2-1: cmd cmplt err -108
[28882.293063] usb 2-1: cmd cmplt err -108
[28882.293074] usb 2-1: cmd cmplt err -108
[28882.293084] usb 2-1: cmd cmplt err -108
[28882.293097] usb 2-1: cmd cmplt err -108
[28882.293107] usb 2-1: cmd cmplt err -108
[28882.293116] usb 2-1: cmd cmplt err -108
[28882.293319] xhci_hcd 0000:01:00.0: HC died; cleaning up
[28882.293860] usb 1-1: USB disconnect, device number 2
[28882.296644] usb 2-1: USB disconnect, device number 2
[28882.297038] sd 0:0:0:0: [sda] tag#6 uas_zap_pending 0 uas-tag 1 inflight: CMD 
[28882.297054] sd 0:0:0:0: [sda] tag#6 CDB: opcode=0x2a 2a 00 00 65 80 d8 00 00 08 00
[28882.297081] sd 0:0:0:0: [sda] tag#7 uas_zap_pending 0 uas-tag 2 inflight: CMD 
[28882.297093] sd 0:0:0:0: [sda] tag#7 CDB: opcode=0x2a 2a 00 00 65 80 e8 00 00 10 00
[28882.297106] sd 0:0:0:0: [sda] tag#4 uas_zap_pending 0 uas-tag 3 inflight: CMD 
[28882.297117] sd 0:0:0:0: [sda] tag#4 CDB: opcode=0x2a 2a 00 00 65 81 38 00 00 30 00
[28882.297130] sd 0:0:0:0: [sda] tag#3 uas_zap_pending 0 uas-tag 4 inflight: CMD 
[28882.297141] sd 0:0:0:0: [sda] tag#3 CDB: opcode=0x2a 2a 00 02 09 34 28 00 00 10 00
[28882.297156] sd 0:0:0:0: [sda] tag#0 uas_zap_pending 0 uas-tag 5 inflight: CMD 
[28882.297166] sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x2a 2a 00 02 13 6c 70 00 00 08 00
[28882.297179] sd 0:0:0:0: [sda] tag#1 uas_zap_pending 0 uas-tag 6 inflight: CMD 
[28882.297189] sd 0:0:0:0: [sda] tag#1 CDB: opcode=0x2a 2a 00 02 48 20 00 00 00 08 00
[28882.297195] sd 0:0:0:0: [sda] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
[28882.297209] sd 0:0:0:0: [sda] tag#2 uas_zap_pending 0 uas-tag 7 inflight: CMD 
[28882.297218] sd 0:0:0:0: [sda] tag#3 CDB: opcode=0x2a 2a 00 02 09 34 28 00 00 10 00
[28882.297220] sd 0:0:0:0: [sda] tag#2 CDB: opcode=0x2a 2a 00 02 48 20 80 00 00 08 00
[28882.297233] sd 0:0:0:0: [sda] tag#5 uas_zap_pending 0 uas-tag 8 inflight: CMD 
[28882.297248] blk_update_request: I/O error, dev sda, sector 34157608 op 0x1:(WRITE) flags 0x103000 phys_seg 2 prio class 0
[28882.297251] sd 0:0:0:0: [sda] tag#5 CDB: opcode=0x2a 2a 00 02 48 21 28 00 00 08 00
[28882.297263] sd 0:0:0:0: [sda] tag#8 uas_zap_pending 0 uas-tag 9 inflight: CMD 
[28882.297271] Buffer I/O error on dev sda2, logical block 4203141, lost async page write
[28882.297281] sd 0:0:0:0: [sda] tag#8 CDB: opcode=0x2a 2a 00 02 48 21 40 00 00 08 00
[28882.297295] sd 0:0:0:0: [sda] tag#9 uas_zap_pending 0 uas-tag 10 inflight: CMD 
[28882.297304] sd 0:0:0:0: [sda] tag#9 CDB: opcode=0x2a 2a 00 00 e1 12 80 00 00 18 00
[28882.297318] sd 0:0:0:0: [sda] tag#10 uas_zap_pending 0 uas-tag 11 inflight: CMD 
[28882.297323] Buffer I/O error on dev sda2, logical block 4203142, lost async page write
[28882.297332] sd 0:0:0:0: [sda] tag#10 CDB: opcode=0x2a 2a 00 00 e1 51 30 00 00 18 00
[28882.297347] sd 0:0:0:0: [sda] tag#11 uas_zap_pending 0 uas-tag 12 inflight: CMD 
[28882.297357] sd 0:0:0:0: [sda] tag#11 CDB: opcode=0x2a 2a 00 00 e0 9f f8 00 00 08 00
[28882.297370] sd 0:0:0:0: [sda] tag#12 uas_zap_pending 0 uas-tag 13 inflight: CMD 
[28882.297380] sd 0:0:0:0: [sda] tag#12 CDB: opcode=0x2a 2a 00 00 e0 c4 f8 00 00 08 00
[28882.297393] sd 0:0:0:0: [sda] tag#13 uas_zap_pending 0 uas-tag 14 inflight: CMD 
[28882.297403] sd 0:0:0:0: [sda] tag#13 CDB: opcode=0x2a 2a 00 00 11 21 68 00 00 08 00
[28882.297416] sd 0:0:0:0: [sda] tag#14 uas_zap_pending 0 uas-tag 15 inflight: CMD 
[28882.297425] sd 0:0:0:0: [sda] tag#14 CDB: opcode=0x2a 2a 00 00 88 20 08 00 00 08 00
[28882.297438] sd 0:0:0:0: [sda] tag#15 uas_zap_pending 0 uas-tag 16 inflight: CMD 
[28882.297448] sd 0:0:0:0: [sda] tag#15 CDB: opcode=0x2a 2a 00 00 08 20 00 00 00 08 00
[28882.297461] sd 0:0:0:0: [sda] tag#16 uas_zap_pending 0 uas-tag 17 inflight: CMD 
[28882.297470] sd 0:0:0:0: [sda] tag#16 CDB: opcode=0x2a 2a 00 00 08 2e c8 00 00 10 00
[28882.297483] sd 0:0:0:0: [sda] tag#17 uas_zap_pending 0 uas-tag 18 inflight: CMD 
[28882.297493] sd 0:0:0:0: [sda] tag#17 CDB: opcode=0x2a 2a 00 00 08 a3 10 00 00 08 00
[28882.297506] sd 0:0:0:0: [sda] tag#18 uas_zap_pending 0 uas-tag 19 inflight: CMD 
[28882.297515] sd 0:0:0:0: [sda] tag#18 CDB: opcode=0x2a 2a 00 00 08 f3 18 00 00 08 00
[28882.297528] sd 0:0:0:0: [sda] tag#19 uas_zap_pending 0 uas-tag 20 inflight: CMD 
[28882.297537] sd 0:0:0:0: [sda] tag#19 CDB: opcode=0x2a 2a 00 00 88 25 a0 00 00 08 00
[28882.297549] sd 0:0:0:0: [sda] tag#20 uas_zap_pending 0 uas-tag 21 inflight: CMD 
[28882.297559] sd 0:0:0:0: [sda] tag#20 CDB: opcode=0x2a 2a 00 00 88 25 b0 00 00 08 00
[28882.297571] sd 0:0:0:0: [sda] tag#21 uas_zap_pending 0 uas-tag 22 inflight: CMD 
[28882.297581] sd 0:0:0:0: [sda] tag#21 CDB: opcode=0x2a 2a 00 00 88 29 88 00 00 08 00
[28882.297593] sd 0:0:0:0: [sda] tag#22 uas_zap_pending 0 uas-tag 23 inflight: CMD 
[28882.297603] sd 0:0:0:0: [sda] tag#22 CDB: opcode=0x2a 2a 00 00 89 35 70 00 00 08 00
[28882.297615] sd 0:0:0:0: [sda] tag#23 uas_zap_pending 0 uas-tag 24 inflight: CMD 
[28882.297625] sd 0:0:0:0: [sda] tag#23 CDB: opcode=0x2a 2a 00 01 48 20 10 00 00 08 00
[28882.297638] sd 0:0:0:0: [sda] tag#24 uas_zap_pending 0 uas-tag 25 inflight: CMD 
[28882.297648] sd 0:0:0:0: [sda] tag#24 CDB: opcode=0x2a 2a 00 01 c8 2f a8 00 00 08 00
[28882.297661] sd 0:0:0:0: [sda] tag#25 uas_zap_pending 0 uas-tag 26 inflight: CMD 
[28882.297670] sd 0:0:0:0: [sda] tag#25 CDB: opcode=0x2a 2a 00 01 c9 34 f0 00 00 08 00
[28882.297682] sd 0:0:0:0: [sda] tag#26 uas_zap_pending 0 uas-tag 27 inflight: CMD 
[28882.297692] sd 0:0:0:0: [sda] tag#26 CDB: opcode=0x2a 2a 00 02 08 20 00 00 00 08 00
[28882.297704] sd 0:0:0:0: [sda] tag#27 uas_zap_pending 0 uas-tag 28 inflight: CMD 
[28882.297714] sd 0:0:0:0: [sda] tag#27 CDB: opcode=0x2a 2a 00 02 08 20 80 00 00 08 00
[28882.297726] sd 0:0:0:0: [sda] tag#28 uas_zap_pending 0 uas-tag 29 inflight: CMD 
[28882.297736] sd 0:0:0:0: [sda] tag#28 CDB: opcode=0x2a 2a 00 02 08 29 50 00 00 08 00
[28882.297809] sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
[28882.297828] sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x2a 2a 00 02 13 6c 70 00 00 08 00
[28882.297844] blk_update_request: I/O error, dev sda, sector 34827376 op 0x1:(WRITE) flags 0x0 phys_seg 1 prio class 0
[28882.297863] EXT4-fs warning (device sda2): ext4_end_bio:315: I/O error 10 writing to inode 1044678 (offset 0 size 4096 starting block 4353423)
[28882.297880] Buffer I/O error on device sda2, logical block 4286862
[28882.297887] sd 0:0:0:0: [sda] tag#6 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
[28882.297903] sd 0:0:0:0: [sda] tag#6 CDB: opcode=0x2a 2a 00 00 65 80 d8 00 00 08 00
[28882.297919] blk_update_request: I/O error, dev sda, sector 6652120 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
[28882.297932] sd 0:0:0:0: [sda] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
[28882.297945] EXT4-fs warning (device sda2): ext4_end_bio:315: I/O error 10 writing to inode 260101 (offset 0 size 0 starting block 831516)
[28882.297948] sd 0:0:0:0: [sda] tag#1 CDB: opcode=0x2a 2a 00 02 48 20 00 00 00 08 00
[28882.297958] Buffer I/O error on device sda2, logical block 764955
[28882.297964] blk_update_request: I/O error, dev sda, sector 38281216 op 0x1:(WRITE) flags 0x103000 phys_seg 1 prio class 0
[28882.297977] Buffer I/O error on dev sda2, logical block 4718592, lost async page write
[28882.298019] sd 0:0:0:0: [sda] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
[28882.298032] sd 0:0:0:0: [sda] tag#7 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
[28882.298035] sd 0:0:0:0: [sda] tag#2 CDB: opcode=0x2a 2a 00 02 48 20 80 00 00 08 00
[28882.298048] blk_update_request: I/O error, dev sda, sector 38281344 op 0x1:(WRITE) flags 0x103000 phys_seg 1 prio class 0
[28882.298051] sd 0:0:0:0: [sda] tag#7 CDB: opcode=0x2a 2a 00 00 65 80 e8 00 00 10 00
[28882.298061] Buffer I/O error on dev sda2, logical block 4718608, lost async page write
[28882.298067] blk_update_request: I/O error, dev sda, sector 6652136 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
[28882.298083] EXT4-fs warning (device sda2): ext4_end_bio:315: I/O error 10 writing to inode 260101 (offset 0 size 0 starting block 831519)
[28882.298095] Buffer I/O error on device sda2, logical block 764957
[28882.298102] sd 0:0:0:0: [sda] tag#5 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
[28882.298109] Buffer I/O error on device sda2, logical block 764958
[28882.298116] sd 0:0:0:0: [sda] tag#5 CDB: opcode=0x2a 2a 00 02 48 21 28 00 00 08 00
[28882.298129] blk_update_request: I/O error, dev sda, sector 38281512 op 0x1:(WRITE) flags 0x103000 phys_seg 1 prio class 0
[28882.298141] Buffer I/O error on dev sda2, logical block 4718629, lost async page write
[28882.298145] sd 0:0:0:0: [sda] tag#4 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
[28882.298159] sd 0:0:0:0: [sda] tag#4 CDB: opcode=0x2a 2a 00 00 65 81 38 00 00 30 00
[28882.298173] blk_update_request: I/O error, dev sda, sector 6652216 op 0x1:(WRITE) flags 0x800 phys_seg 6 prio class 0
[28882.298179] sd 0:0:0:0: [sda] tag#8 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
[28882.298191] EXT4-fs warning (device sda2): ext4_end_bio:315: I/O error 10 writing to inode 260101 (offset 0 size 0 starting block 831533)
[28882.298194] sd 0:0:0:0: [sda] tag#8 CDB: opcode=0x2a 2a 00 02 48 21 40 00 00 08 00
[28882.298203] Buffer I/O error on device sda2, logical block 764967
[28882.298208] blk_update_request: I/O error, dev sda, sector 38281536 op 0x1:(WRITE) flags 0x103000 phys_seg 1 prio class 0
[28882.298220] Buffer I/O error on dev sda2, logical block 4718632, lost async page write
[28882.298223] Buffer I/O error on device sda2, logical block 764968
[28882.298235] Buffer I/O error on device sda2, logical block 764969
[28882.298248] Buffer I/O error on device sda2, logical block 764970
[28882.298257] sd 0:0:0:0: [sda] tag#9 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
[28882.298261] Buffer I/O error on device sda2, logical block 764971
[28882.298272] sd 0:0:0:0: [sda] tag#9 CDB: opcode=0x2a 2a 00 00 e1 12 80 00 00 18 00
[28882.298274] Buffer I/O error on device sda2, logical block 764972
[28882.298285] blk_update_request: I/O error, dev sda, sector 14750336 op 0x1:(WRITE) flags 0x0 phys_seg 3 prio class 0
[28882.298301] EXT4-fs warning (device sda2): ext4_end_bio:315: I/O error 10 writing to inode 702 (offset 0 size 0 starting block 1843793)
[28882.298329] EXT4-fs warning (device sda2): ext4_end_bio:315: I/O error 10 writing to inode 702 (offset 6623232 size 8192 starting block 1843795)
[28882.298335] Buffer I/O error on dev sda2, logical block 4203130, lost async page write
[28882.298360] Buffer I/O error on dev sda2, logical block 4203131, lost async page write
[28882.298382] EXT4-fs warning (device sda2): ext4_end_bio:315: I/O error 10 writing to inode 5480 (offset 0 size 0 starting block 1845799)
[28882.298385] Buffer I/O error on dev sda2, logical block 4203132, lost async page write
[28882.298407] EXT4-fs warning (device sda2): ext4_end_bio:315: I/O error 10 writing to inode 5480 (offset 6451200 size 8192 starting block 1845801)
[28882.298462] EXT4-fs warning (device sda2): ext4_end_bio:315: I/O error 10 writing to inode 5504 (offset 0 size 0 starting block 1840128)
[28882.298498] EXT4-fs warning (device sda2): ext4_end_bio:315: I/O error 10 writing to inode 5511 (offset 0 size 0 starting block 1841312)
[28882.298569] Buffer I/O error on dev sda2, logical block 1048577, lost async page write
[28882.307151] JBD2: Detected IO errors while flushing file data on sda2-8
[28882.307196] Aborting journal on device sda2-8.
[28882.307337] JBD2: Error -5 detected when updating journal superblock for sda2-8.
[28882.307442] JBD2: Detected IO errors while flushing file data on sda2-8
[28882.318324] EXT4-fs (sda2): previous I/O error to superblock detected
[28882.324787] EXT4-fs (sda2): I/O error while writing superblock
[28882.324801] EXT4-fs error (device sda2): ext4_journal_check_start:61: Detected aborted journal
[28882.335182] EXT4-fs (sda2): Remounting filesystem read-only
[28882.341249] EXT4-fs (sda2): I/O error while writing superblock
[28882.467747] sd 0:0:0:0: [sda] Synchronizing SCSI cache

This requires a hard reset of the PI4. Any idea what could be going on here?

I've tried the Coral device on the 2 other RPI4Bs that I have and it produces the same error/problem.

Implement way to healthcheck edgetpu detector to ensure it's still functioning

I would prefer to do this with a docker healthcheck.

The following command will run a basic detection of a 1 pixel image and return a status code:

echo "{\"detector_name\":\"edgetpu\", \"data\":\"Qk06AAAAAAAAADYAAAAoAAAAAQAAAAEAAAABABgAAAAAAAAAAADEDgAAxA4AAAAAAAAAAAAA/////w==\"}" | curl -d@- -f http://localhost:8080/detect

I need to change some of the return values inside of doods for this to work. Opening an issue to track.

Jetson Nano

It took me quite some time to add support for the Jetson Nano, so I thought I would share my progress. This isn't an ideal/complete solution; maybe someone could build upon or reuse this. Using this Docker image, detection time decreased from ~4 seconds to ~1 second with the faster_rcnn_inception_v2_coco_2018_01_28 model, as the processing was offloaded to the GPU.

  1. I was not able to build Bazel on the nvcr.io/nvidia/l4t-base image, so I just used pre-built binaries where Bazel is needed.
  2. It uses Nvidia pre-compiled binaries for Tensorflow and Tensorflow C library built by photoprism.org.
  3. It does not include the TensorflowLite C binary.
    3.1. For doods to compile, doods/detector/detector.go needs to be modified and references to Tensorflow Lite should be removed/commented out before building the image. Tensorflow Lite is not really needed for the Jetson Nano as we can just use Tensorflow, so I did not bother adding Tensorflow Lite support.
  4. It took about 5 hours to build the image.
  5. I had to install Cuda 10.0 libraries although the latest Jetpack ships with Cuda 10.2. The TensorFlow C library was compiled with Cuda 10.0, so we need to downgrade.
  6. I have attached the system libraries as volumes for the Docker to see. I am not sure if this is the right way to do it, e.g.:
      volumes:
       - /usr/local/cuda-10.0/targets/aarch64-linux/lib:/usr/local/cuda-10.0/lib64
       - /usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu
    
  7. Nvidia runtime should be set as default

Here is the Dockerfile to build the image.

# 32.3.1 is the last version that includes Cuda 10.0
FROM nvcr.io/nvidia/l4t-base:r32.3.1

RUN apt-get update -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran -y
RUN apt-get install python3-pip -y
RUN pip3 install -U pip testresources setuptools
RUN DEBIAN_FRONTEND=noninteractive apt-get install python3 python-dev python3-dev build-essential libssl-dev libffi-dev libxml2-dev libxslt1-dev zlib1g-dev -yq

RUN pip3 install -U numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11

RUN pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'

# Install reqs with cross compile support
RUN dpkg --add-architecture arm64 && \
    apt-get update && apt-get install -y --no-install-recommends \
    pkg-config zip zlib1g-dev unzip wget bash-completion git curl \
    build-essential patch g++ python python-future python-numpy python-six python3 \
    cmake ca-certificates \
    libc6-dev:arm64 libstdc++6:arm64 libusb-1.0-0:arm64

# Install protoc
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v3.9.1/protoc-3.9.1-linux-x86_64.zip && \
    unzip protoc-3.9.1-linux-x86_64.zip -d /usr/local && \
    rm /usr/local/readme.txt && \
    rm protoc-3.9.1-linux-x86_64.zip

RUN apt-get update && apt-get install -y --no-install-recommends \
    pkg-config zip zlib1g-dev unzip wget bash-completion git curl \
    build-essential patch g++ python python-future python3 ca-certificates \
    libc6-dev libstdc++6 libusb-1.0-0 xz-utils

# Download and configure the build environment for gcc 6 which is needed to compile everything else
RUN mkdir -p /tmp/sysroot/lib && mkdir -p /tmp/sysroot/usr/lib && \
    cd /tmp && \
    wget --no-check-certificate https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/aarch64-linux-gnu/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu.tar.xz -O /tmp/toolchain.tar.xz && \
    tar xf /tmp/toolchain.tar.xz && \
    rm toolchain.tar.xz && \
    cp -r /tmp/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu/aarch64-linux-gnu/libc/* /tmp/sysroot/
RUN mkdir -p /tmp/debs && cd /tmp/debs && apt-get download libc6:arm64 libc6-dev:arm64 && \
    ar x libc6_*.deb && tar xvf data.tar.xz && \
    ar x libc6-dev*.deb && tar xvf data.tar.xz && \
    cp -R usr /tmp/sysroot && cp -R lib /tmp/sysroot && rm -Rf /tmp/debs && \
    mkdir -p /tmp/debs && cd /tmp/debs && \
    apt-get download libusb-1.0-0:arm64 libudev1:arm64 zlib1g-dev:arm64 zlib1g:arm64 && \
    ar x libusb-1.0*.deb && tar xvf data.tar.xz && \
    ar x libudev1*.deb && tar xvf data.tar.xz && \
    ar x zlib1g_*.deb && tar xvf data.tar.xz && \
    ar x zlib1g-dev*.deb && tar xvf data.tar.xz && rm usr/lib/aarch64-linux-gnu/libz.so && \
    cp -r lib/aarch64-linux-gnu/* /tmp/sysroot/lib && \
    cp -r usr/lib/aarch64-linux-gnu/* /tmp/sysroot/usr/lib && \
    cp -r usr/include/* /tmp/sysroot/usr/include && \
    ln -rs /tmp/sysroot/lib/libusb-1.0.so.0.1.0 /tmp/sysroot/lib/libusb-1.0.so && \
    ln -rs /tmp/sysroot/lib/libudev.so.1.6.13 /tmp/sysroot/lib/libudev.so && \
    ln -rs /tmp/sysroot/lib/libz.so.1.2.11 /tmp/sysroot/lib/libz.so && \
    ln -s /usr/local /tmp/sysroot/usr/local && \
    cd /tmp && rm -Rf /tmp/debs

ENV CC="/tmp/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc"
ENV CXX="/tmp/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-g++"
ENV LDFLAGS="-v -L /lib -L /usr/lib --sysroot /tmp/sysroot"
ENV CFLAGS="-L /lib -L /usr/lib --sysroot /tmp/sysroot"
ENV CXXFLAGS="-L /lib -L /usr/lib --sysroot /tmp/sysroot"

# Install GOCV
ARG OPENCV_VERSION="4.1.2"
ENV OPENCV_VERSION $OPENCV_VERSION
RUN cd /tmp && \
    curl -Lo opencv.zip https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip && \
    unzip -q opencv.zip && \
    curl -Lo opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/${OPENCV_VERSION}.zip && \
    unzip -q opencv_contrib.zip && \
    rm opencv.zip opencv_contrib.zip && \
    cd opencv-${OPENCV_VERSION} && \
    mkdir build && cd build && \
    cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-${OPENCV_VERSION}/modules \
    -D CMAKE_TOOLCHAIN_FILE=/tmp/opencv-${OPENCV_VERSION}/platforms/linux/aarch64-gnu.toolchain.cmake \
    -D WITH_CUDA=ON \
    -D ENABLE_FAST_MATH=1 \
    -D CUDA_FAST_MATH=1 \
    -D WITH_CUBLAS=1 \
    -D WITH_JASPER=OFF \
    -D WITH_QT=OFF \
    -D WITH_GTK=OFF \
    -D WITH_IPP=OFF \
    -D BUILD_DOCS=OFF \
    -D BUILD_EXAMPLES=OFF \
    -D BUILD_IPP_IW=OFF \
    -D BUILD_TESTS=OFF \
    -D BUILD_PERF_TESTS=OFF \
    -D BUILD_opencv_java=NO \
    -D BUILD_opencv_python=NO \
    -D BUILD_opencv_python2=NO \
    -D BUILD_opencv_python3=NO \
    -D OPENCV_GENERATE_PKGCONFIG=ON .. && \
    make -j $(nproc --all) && \
    make preinstall && make install && \
    cd /tmp && rm -rf opencv*

# Configure the Go version to be used
ENV GO_ARCH "arm64"
ENV GOARCH=arm64

# Install Go
ENV GO_VERSION "1.14.2"
RUN curl -kLo go${GO_VERSION}.linux-${GO_ARCH}.tar.gz https://dl.google.com/go/go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \
    tar -C /usr/local -xzf go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \
    rm go${GO_VERSION}.linux-${GO_ARCH}.tar.gz


RUN apt-get update && apt-get install -y --no-install-recommends \
    pkg-config zip zlib1g-dev unzip wget bash-completion git curl \
    build-essential patch g++ python python-future python3 ca-certificates \
    libc6-dev libstdc++6 libusb-1.0-0

ENV GOOS=linux
ENV CGO_ENABLED=1
ENV PATH /usr/local/go/bin:/go/bin:${PATH}
ENV GOPATH /go

# Create the build directory
RUN mkdir /build
WORKDIR /build

ENV CC=aarch64-linux-gnu-gcc
ENV CXX=aarch64-linux-gnu-g++

ENV LD_LIBRARY_PATH "/usr/local/cuda-10.2/lib64:/usr/local/lib:${PATH}"
ENV PATH="/usr/local/cuda-10.2/bin:/usr/local/cuda/bin:${PATH}"

# Install pre-compiled Tensorflow Go C bindings
RUN mkdir /tmp/libtensorflow && cd /tmp/libtensorflow && \
    wget https://dl.photoprism.org/tensorflow/nvidia-jetson/libtensorflow-jetson-nano-1.15.2.tar.gz && \
    tar xvzf libtensorflow-jetson-nano-1.15.2.tar.gz && \
    cd lib && \
    cp libtensorflow_framework.so /usr/local/lib/libtensorflow_framework.so.1 && \
    cp libtensorflow_framework.so /usr/local/lib/libtensorflow_framework.so && \
    cp libtensorflow.so /usr/local/lib/libtensorflow.so && \
    rm -rf /tmp/libtensorflow

RUN ldconfig
ADD . .
RUN make
RUN ls -la  /usr/local/lib

RUN apt-get update && \
    apt-get install -y --no-install-recommends libusb-1.0 libc++-7-dev wget unzip ca-certificates libdc1394-22 libavcodec57 libavformat57 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
RUN mkdir -p /opt/doods
WORKDIR /opt/doods
#COPY --from=builder /usr/local/lib/. /usr/local/lib/.
#COPY --from=builder /build/doods /opt/doods/doods
RUN cp -R /build/doods /opt/doods/doods
ADD config.yaml /opt/doods/config.yaml
RUN ldconfig

RUN mkdir models
RUN wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip && unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip && rm coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip && mv detect.tflite models/coco_ssd_mobilenet_v1_1.0_quant.tflite && rm labelmap.txt
RUN wget https://dl.google.com/coral/canned_models/coco_labels.txt && mv coco_labels.txt models/coco_labels0.txt

RUN ls -la  /usr/lib/aarch64-linux-gnu
ENV LD_LIBRARY_PATH "/usr/lib/aarch64-linux-gnu:/usr/local/cuda-10.0/lib64:/usr/local/cuda/lib64:/usr/local/lib:${LD_LIBRARY_PATH}"

CMD ["/opt/doods/doods", "-c", "/opt/doods/config.yaml", "api"]

# run with docker run -it --runtime=nvidia -v /opt/doods/models:/opt/doods/models -v /opt/doods/config.yaml:/opt/doods/config.yaml -v /usr/local/cuda-10.0/targets/aarch64-linux/lib:/usr/local/cuda-10.0/lib64 -v /usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu  -p 8080:8080 helix3/doods:jetsonnano

To build it:

  • clone doods repo
  • comment out Tensorflow Lite references as indicated in 3.1
  • create Dockerfile.jetsonnano
  • run docker build -t MYUSERNAME/doods:jetsonnano -f Dockerfile.jetsonnano .

CPU supports instruction this binary was not compiled to use

I'm trying to install this on a Synology NAS. As long as I use the tflite engine, things are golden. Once I try to switch it to tensorflow, I get this message, and then when something does attempt to request an image scan it takes forever and consumes tons of memory. Do I need to compile my own tensorflow binary or something?

REALLY slow performance with Arm H3

Hello, I started using doods with an H3 based board. As far as specs go, the H3 boards are comparable to a Raspberry Pi 3, but the performance was REALLY slow.

Pi 4
0.127 secs for 640*460
0.239241275 secs for 1500x1500

Pi 3+
0.22891855 - 0.240357849 secs for 640*460
0.452125477 - 0.432114728 secs for 1500x1500

ARM H3 NanoPi
3.538851496 secs for 640*460
9.828188412 secs for 1500x1500

Overlapping/Redundant Identifications

Hello,

First of all, I love this tool. Very quick and easy to set up in Docker and connect to Home Assistant, with plenty of configuration options too.

I set up a security camera to send pictures of my driveway to DOODS for identification of "car". I am still trying to find a sweet spot for the threshold, but have noticed several times that there can be multiple overlapping identifications (more than one rectangle for the same car in the same single image). Is this behavior intended, and is there any potential or desire to add a setting to prevent redundant identifications?

Thanks!

Is the GPU being used?

I'm slightly confused by the log line:

Adding visible gpu devices: 0

When nvidia-smi shows the model being loaded (albeit I don't think I have enough VRAM on this GPU, so maybe that's the issue, maybe not).

Does this mean it's adding ZERO devices, or it's adding device 0?

2020-11-22 14:04:24.562677: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:24.562919: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: Quadro P600 computeCapability: 6.1
coreClock: 1.5565GHz coreCount: 3 deviceMemorySize: 1.95GiB deviceMemoryBandwidth: 59.75GiB/s
2020-11-22 14:04:24.562938: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
2020-11-22 14:04:24.562948: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-11-22 14:04:24.562956: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-11-22 14:04:24.562963: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-11-22 14:04:24.562970: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-11-22 14:04:24.562977: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-11-22 14:04:24.562984: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-11-22 14:04:24.563023: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:24.563217: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:24.563382: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0
2020-11-22 14:04:24.563401: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1085] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-11-22 14:04:24.563408: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1091]      0
2020-11-22 14:04:24.563427: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1104] 0:   N
2020-11-22 14:04:24.563496: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:24.563687: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:24.563868: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1230] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1545 MB memory) -> physical GPU (device: 0, name: Quadro P600, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-11-22T14:04:24.774Z        INFO    tensorflow/tensorflow.go:259    Detection Complete      {"package": "detector.tensorflow", "name": "tensorflow", "id": "", "duration": 0.200685689, "detections": 1}
2020-11-22T14:04:24.774Z        INFO    server/server.go:138    HTTP Request    {"status": 200, "took": 0.224060097, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "34f6cb495836/9zLnpxQJf5-001880", "remote": "172.200.0.1:51374"}
2020-11-22 14:04:24.797009: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:24.797278: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: Quadro P600 computeCapability: 6.1
coreClock: 1.5565GHz coreCount: 3 deviceMemorySize: 1.95GiB deviceMemoryBandwidth: 59.75GiB/s
2020-11-22 14:04:24.797297: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
2020-11-22 14:04:24.797306: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-11-22 14:04:24.797314: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-11-22 14:04:24.797320: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-11-22 14:04:24.797326: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-11-22 14:04:24.797334: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-11-22 14:04:24.797340: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-11-22 14:04:24.797380: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:24.797575: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:24.797736: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0
2020-11-22 14:04:24.797755: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1085] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-11-22 14:04:24.797762: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1091]      0
2020-11-22 14:04:24.797768: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1104] 0:   N
2020-11-22 14:04:24.797824: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:24.798014: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:24.798182: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1230] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1545 MB memory) -> physical GPU (device: 0, name: Quadro P600, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-11-22T14:04:25.022Z        INFO    tensorflow/tensorflow.go:259    Detection Complete      {"package": "detector.tensorflow", "name": "tensorflow", "id": "", "duration": 0.215041021, "detections": 2}
2020-11-22T14:04:25.023Z        INFO    server/server.go:138    HTTP Request    {"status": 200, "took": 0.236342628, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "34f6cb495836/9zLnpxQJf5-001881", "remote": "172.200.0.1:51378"}
2020-11-22 14:04:25.053201: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:25.053479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: Quadro P600 computeCapability: 6.1
coreClock: 1.5565GHz coreCount: 3 deviceMemorySize: 1.95GiB deviceMemoryBandwidth: 59.75GiB/s
2020-11-22 14:04:25.053499: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
2020-11-22 14:04:25.053508: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-11-22 14:04:25.053517: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-11-22 14:04:25.053524: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-11-22 14:04:25.053531: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-11-22 14:04:25.053537: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-11-22 14:04:25.053544: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-11-22 14:04:25.053587: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:25.053795: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:25.053973: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0
2020-11-22 14:04:25.053993: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1085] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-11-22 14:04:25.054001: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1091]      0
2020-11-22 14:04:25.054008: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1104] 0:   N
2020-11-22 14:04:25.054072: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:25.054277: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-22 14:04:25.054457: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1230] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1545 MB memory) -> physical GPU (device: 0, name: Quadro P600, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-11-22T14:04:25.248Z        INFO    tensorflow/tensorflow.go:259    Detection Complete      {"package": "detector.tensorflow", "name": "tensorflow", "id": "", "duration": 0.183255925, "detections": 1}
2020-11-22T14:04:25.248Z        INFO    server/server.go:138    HTTP Request    {"status": 200, "took": 0.208017169, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "34f6cb495836/9zLnpxQJf5-001882", "remote": "172.200.0.1:51382"}

Cannot load custom classification models

Trying to load the "MobileNet V2 (ImageNet)" model from https://coral.ai/models/.

https://github.com/google-coral/edgetpu/raw/master/test_data/mobilenet_v2_1.0_224_quant_edgetpu.tflite
https://github.com/google-coral/edgetpu/raw/master/test_data/imagenet_labels.txt
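
One way to fetch those two files into the models directory referenced by the config below (a sketch only; note that the label file name used in the config differs, so rename or adjust the config accordingly, and -L is needed because GitHub redirects raw/master URLs):

curl -L -o models/mobilenet_v2_1.0_224_quant_edgetpu.tflite https://github.com/google-coral/edgetpu/raw/master/test_data/mobilenet_v2_1.0_224_quant_edgetpu.tflite
curl -L -o models/imagenet_labels.txt https://github.com/google-coral/edgetpu/raw/master/test_data/imagenet_labels.txt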

config.yaml:

doods:
  detectors:
    - name: default
      type: tflite
      modelFile: models/coco_ssd_mobilenet_v1_1.0_quant.tflite
      labelFile: models/coco_labels0.txt
      numThreads: 0
      numConcurrent: 4 
    - name: tensorflow
      type: tensorflow
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels1.txt
      width: 224
      height: 224
      numThreads: 0
      numConcurrent: 4
    - name: mobilenet
      type: tflite
      modelFile: models/mobilenet_v2_1.0_224_quant_edgetpu.tflite
      labelFile: models/mobilenet_v2_labels.txt
      width: 224
      height: 224
      numThreads: 0
      numConcurrent: 4 

But I get an error at startup:

2020-08-06T19:31:00.348Z WARN tflite/detector.go:189 Error {"package": "detector.tflite", "name": "mobilenet", "message": "Encountered unresolved custom op: \u0010.", "user_data": null}

2020-08-06T19:31:00.348Z WARN tflite/detector.go:189 Error {"package": "detector.tflite", "name": "mobilenet", "message": "Node number -1798026136 (Node number %d (%s) %s.\n) failed to prepare.\n", "user_data": null}

2020-08-06T19:31:00.348Z ERROR detector/detector.go:73 Could not initialize detector mobilenet: interpreter allocate failed {"package": "detector"}

2020-08-06T19:31:00.349Z INFO server/server.go:274 API Listening {"package": "server", "address": ":8080", "tls": false, "version": "v0.2.1-0-g91828a0-dirty"}

Jetson Nano Support

I have run the Docker cuda variant, but I get errors indicating that the aarch64 architecture on the Nano is not supported.

Could you please confirm whether DOODS will in fact run on the Nano using its built-in GPU?

Thanks

Unable to build cuda image

I am trying to rebuild the cuda image so that the NVIDIA library versions inside the Docker image match the driver and kernel module versions on the host OS.

When building the image with the following command:
docker build -f Dockerfile.base.cuda --pull .

I get an error during the compilation of one of the components:

# Compiling...
go build -ldflags "-X github.com/snowzach/doods/conf.Executable=doods -X github.com/snowzach/doods/conf.GitVersion=v0.2.1-0-g91828a0-dirty" -o doods
# github.com/snowzach/doods/detector/tflite/go-tflite/delegates/edgetpu
In file included from detector/tflite/go-tflite/delegates/edgetpu/edgetpu.go:5:0:
./edgetpu.go.h:8:10: fatal error: tensorflow/lite/c/c_api.h: No such file or directory
 #include <tensorflow/lite/c/c_api.h>
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
# github.com/snowzach/doods/detector/tflite/go-tflite
In file included from detector/tflite/go-tflite/tflite.go:5:0:
./tflite.go.h:8:10: fatal error: tensorflow/lite/c/c_api.h: No such file or directory
 #include <tensorflow/lite/c/c_api.h>
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [doods] Error 2

CORS support

Would it be possible to add support for Cross-Origin Resource Sharing (CORS)? This would make it possible to call the API directly from a website.
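
For what it's worth, a rough way to check from the command line whether the server already returns CORS headers is to send a browser-style preflight request and inspect the response headers. This is only a sketch: the host and port follow the curl examples elsewhere in this document, the Origin value is just an illustration, and whether DOODS answers the preflight at all is exactly what this request would reveal.

curl -i -X OPTIONS -H "Origin: http://my-dashboard.example" -H "Access-Control-Request-Method: POST" http://localhost:8080/detect

If no Access-Control-Allow-Origin header comes back, a browser will block direct calls to the API from a web page.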

Docker image tagged 'cuda' does not work. Not listening for API calls

When I tried to use the cuda Docker image, the HTTP API did not respond; it just said 'connection refused'. The following command was used:

docker run --name="doods-gpu" --gpus all -i -t -d -v /home/user/doods-gpu/models:/opt/doods/models -v /home/user/doods-gpu/config.yaml:/opt/doods/config.yaml -p 8080:8080 snowzach/doods:cuda

I have successfully run the 'latest' docker image with the command:

docker run -i -t -d --device /dev/bus/usb -v /home/user/doods/models:/opt/doods/models -v /home/user/doods/config.yaml:/opt/doods/config.yaml -p 8080:8080 snowzach/doods:latest

I only have one container running at a time.

Also, I was not able to build the CUDA image locally; I receive an error:

unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/user/doods-build/Dockerfile: no such file or directory

The Docker log output from the cuda container suggests it never launches the HTTP listener, or something like that. The log from the 'latest' container shows the API Listening events.

Thanks for your work on this and assistance!

docker log from cuda:

2020-11-03 00:11:59.860454: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2

2020-11-03 00:12:00.247512: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance-critical operations:  SSE3 SSE4.1 SSE4.2 AVX

To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

2020-11-03 00:12:00.257622: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1

2020-11-03 00:12:00.276125: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero

2020-11-03 00:12:00.276884: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties: 

pciBusID: 0000:06:00.0 name: GeForce GTX 1070 computeCapability: 6.1

coreClock: 1.683GHz coreCount: 15 deviceMemorySize: 7.93GiB deviceMemoryBandwidth: 238.66GiB/s

2020-11-03 00:12:00.276903: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2

2020-11-03 00:12:00.286894: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10

2020-11-03 00:12:00.288231: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10

2020-11-03 00:12:00.289245: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10

2020-11-03 00:12:00.292388: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10

2020-11-03 00:12:00.293156: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10

2020-11-03 00:12:00.298797: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7

2020-11-03 00:12:00.298929: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero

2020-11-03 00:12:00.299740: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero

2020-11-03 00:12:00.300415: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0


docker log from latest:

2020-11-02T23:24:34.695Z   INFO    detector/detector.go:79 Configured Detector {"package": "detector", "name": "tensorflow", "type": "tensorflow", "model": "models/faster_rcnn_inception_v2_coco_2018_01_28.pb", "labels": 65, "width": -1, "height": -1}
2020-11-02T23:24:34.697Z   INFO    server/server.go:284    API Listening   {"package": "server", "address": ":8080", "tls": false, "version": "v0.2.5-0-gbf6d7a1-dirty"}
2020-11-03T00:09:08.891Z   INFO    detector/detector.go:79 Configured Detector {"package": "detector", "name": "default", "type": "tflite-edgetpu", "model": "models/coco_ssd_mobilenet_v1_1.0_quant.tflite", "labels": 80, "width": 300, "height": 300}
2020-11-03T00:09:08.908Z   INFO    detector/detector.go:79 Configured Detector {"package": "detector", "name": "edgetpu", "type": "tflite-edgetpu", "model": "models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite", "labels": 80, "width": 300, "height": 300}
2020-11-03 00:09:09.227098: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance-critical operations:  SSE3 SSE4.1 SSE4.2 AVX
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-03T00:09:09.235Z   INFO    detector/detector.go:79 Configured Detector {"package": "detector", "name": "tensorflow", "type": "tensorflow", "model": "models/faster_rcnn_inception_v2_coco_2018_01_28.pb", "labels": 65, "width": -1, "height": -1}
2020-11-03T00:09:09.235Z   INFO    server/server.go:284    API Listening   {"package": "server", "address": ":8080", "tls": false, "version": "v0.2.5-0-gbf6d7a1-dirty"}


CUDA build fail…

cp Dockerfile.base.cuda Dockerfile.
user@ubuntu-macmini2012:~/doods-build$ docker build --tag localdoods .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/user/doods-build/Dockerfile: no such file or directory
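
The error above just means there is no file literally named Dockerfile in the build context. A way to sidestep the copy step entirely is to point docker build at the CUDA Dockerfile with -f, the same way the build command earlier in this section does (the tag name here is only an example):

docker build -f Dockerfile.base.cuda -t localdoods .

This builds from Dockerfile.base.cuda in place without needing a renamed copy.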

Confused with timings / Slow HTTP requests

Hi, I have set up DOODS on a Raspberry Pi 4 (4 GB RAM version) in a Docker container, and I have configured Home Assistant (which is on a different machine but on the same network) to point image_processing at DOODS. Everything works so far, but I am kind of confused on some points.

Please see my start and detection log below:

2020-10-30T14:19:48.764Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "default_edgetpu", "type": "tflite-edgetpu", "model": "models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite", "labels": 80, "width": 300, "height": 300}

2020-10-30T14:19:52.113Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "tensorflow", "type": "tensorflow", "model": "models/faster_rcnn_inception_v2_coco_2018_01_28.pb", "labels": 80, "width": -1, "height": -1}

2020-10-30T14:19:52.117Z	INFO	server/server.go:284	API Listening	{"package": "server", "address": ":8080", "tls": false, "version": "v0.2.5-0-gbf6d7a1-dirty"}

2020-10-30T14:20:04.510Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default_edgetpu", "id": "", "duration": 0.060275315, "detections": 20, "device": {"Type":1,"Path":"/sys/bus/usb/devices/2-1"}}

2020-10-30T14:20:04.515Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 11.233914828, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/0EhmoF4qWh-000001", "remote": "10.0.10.20:53816"}

2020-10-30T14:20:25.348Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default_edgetpu", "id": "", "duration": 0.033383344, "detections": 20, "device": {"Type":1,"Path":"/sys/bus/usb/devices/2-1"}}

If I am reading this correctly, the last line says that the time it took to detect objects is "duration": 0.033383344, correct? But the line before it says HTTP Request {"status": 200, "took": 11.233914828} - is that the time it took for the image to make a round trip from Home Assistant to DOODS and back? That doesn't look right to me, as almost 11 seconds is way too much. Is there something I am missing? (A quick way to double-check the round trip from the client side is sketched below.)

Thank you :)
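
As a rough cross-check of the "took" value, curl can report the total request time as seen from the client side. This is only a sketch: the host name is a placeholder for wherever DOODS is running, the payload file follows the curl examples elsewhere in this document, and the numbers will of course differ on your network.

curl -o /dev/null -s -w "total: %{time_total}s\n" -d@/tmp/postdata.json -H "Content-Type: application/json" -X POST http://<doods-host>:8080/detect

Comparing that client-side total against the "took" figure in the DOODS log helps separate time spent inside DOODS from time spent elsewhere in the pipeline (for example, in Home Assistant itself).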

Error when trying to use Coral USB edgetpu

Error:

$ docker run -it --name doods -p 6060:8080 -v /opt/doods/src/doods-0.2.1/models:/opt/doods/models -v /opt/doods/src/doods-0.2.1/example.yaml:/opt/doods/config.yaml -v /dev/bus/usb:/dev/bus/usb snowzach/doods:latest
2020-07-31T06:57:08.959Z	ERROR	detector/detector.go:73	Could not initialize detector default: no edgetpu devices detected	{"package": "detector"}
2020-07-31T06:57:08.960Z	FATAL	detector/detector.go:83	No detectors configured	{"package": "detector"}

My config:

doods:
  detectors:
    - name: default
      type: tflite
      modelFile: models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
      labelFile: models/coco_labels0.txt
      numThreads: 0
      numConcurrent: 4
      hwAccel: true

I'm using a Proxmox host and passing the USB device through to the VM that hosts a bunch of Docker containers. That is another layer of potential issues, but I've been able to do USB passthrough with an Aeotec Z-Wave USB controller before.

This is what lsusb looks like on the VM

Bus 001 Device 007: ID 1a6e:089a Global Unichip Corp. 
Bus 001 Device 003: ID 0624:0249 Avocent Corp. Virtual Keyboard/Mouse
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 002: ID 0658:0200 Sigma Designs, Inc. 
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 002: ID 0627:0001 Adomax Technology Co., Ltd 
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

Do you have any advice?
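
For comparison, the working 'latest' example elsewhere in this document passes the USB bus to the container with --device rather than a volume mount. A sketch of the same run command in that form (paths and names are taken from the command above; whether this interacts any better with the Proxmox USB passthrough is an open question):

docker run -it --name doods --device /dev/bus/usb -p 6060:8080 -v /opt/doods/src/doods-0.2.1/models:/opt/doods/models -v /opt/doods/src/doods-0.2.1/example.yaml:/opt/doods/config.yaml snowzach/doods:latest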

panic: runtime error: index out of range [0] with length 0

I'm encountering the following error when trying to use a model built via AutoML Vision. The model works when used with classify_image.py in the https://github.com/google-coral/tflite repo, and I have no problem using the default detectors. I've also tried the amd64, noavx, and latest tags, but no luck. Happy to provide any further detail. Thanks!

INFO    detector/detector.go:78 Configured Detector     {"package": "detector", "name": "raccoon", "type": "tflite-edgetpu", "model": "models/raccoon_edgetpu.tflite", "labels": 2, "width": 224, "height": 224}
INFO    server/server.go:273    API Listening   {"package": "server", "address": ":8080", "tls": false}
INFO    server/server.go:137    HTTP Request    {"status": 200, "took": 0.001004527, "request": "/detectors", "method": "GET", "package": "server.request", "request-id": "doods-849c95966b-f2fh6/5IgL5zyRy4-000001", "remote": "10.42.0.1:42076"}
INFO    server/server.go:137    HTTP Request    {"status": 200, "took": 0.000548719, "request": "/detectors", "method": "GET", "package": "server.request", "request-id": "doods-849c95966b-f2fh6/5IgL5zyRy4-000002", "remote": "10.42.0.1:42114"}
INFO    server/server.go:137    HTTP Request    {"status": 200, "took": 0.00054322, "request": "/detectors", "method": "GET", "package": "server.request", "request-id": "doods-849c95966b-f2fh6/5IgL5zyRy4-000003", "remote": "10.42.0.1:42170"}
panic: runtime error: index out of range [0] with length 0

goroutine 98 [running]:
github.com/snowzach/doods/detector/tflite.(*detector).Detect(0xc0000ce2c0, 0xddcec0, 0xc00033cc90, 0xc000033740, 0x0, 0x0, 0x0)
        /build/detector/tflite/detector.go:267 +0x1b85
github.com/snowzach/doods/detector.(*Mux).Detect(0xc00000f260, 0xddcec0, 0xc00033cc90, 0xc000033740, 0xc00000f260, 0xc35401, 0x7f8bac5b64c8)
        /build/detector/detector.go:120 +0xb7
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler.func1(0xddcec0, 0xc00033cc90, 0xc89ba0, 0xc000033740, 0x13, 0xddcec0, 0xc00033cc90, 0x0)
        /build/odrpc/rpc.pb.go:855 +0x86
github.com/grpc-ecosystem/go-grpc-middleware/auth.UnaryServerInterceptor.func1(0xddcec0, 0xc00033cc90, 0xc89ba0, 0xc000033740, 0xc00013e0e0, 0xc00013e100, 0xafa19a, 0xc354e0, 0xc00013e120, 0xc00013e0e0)
        /go/pkg/mod/github.com/grpc-ecosystem/[email protected]/auth/auth.go:47 +0x108
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1(0xddcec0, 0xc00033cc90, 0xc89ba0, 0xc000033740, 0xc0002ccc00, 0x0, 0xc000362a10, 0x414868)
        /go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x63
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1(0xddcec0, 0xc00033cc90, 0xc89ba0, 0xc000033740, 0xc00013e0e0, 0xc00013e100, 0xc000362a80, 0x50d72d, 0xc4c000, 0xc00033cc90)
        /go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34 +0xd5
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler(0xc14100, 0xc00000f260, 0xddcec0, 0xc00033cc90, 0xc00033a540, 0xc0001d4de0, 0xddcec0, 0xc00033cc90, 0xc0004ae000, 0x1db6d)
        /build/odrpc/rpc.pb.go:857 +0x14b
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001c0180, 0xde4100, 0xc0002ce510, 0xc0002ccc00, 0xc0001d51d0, 0x12b8458, 0x0, 0x0, 0x0)
        /go/pkg/mod/google.golang.org/[email protected]/server.go:1007 +0x460
google.golang.org/grpc.(*Server).handleStream(0xc0001c0180, 0xde4100, 0xc0002ce510, 0xc0002ccc00, 0x0)
        /go/pkg/mod/google.golang.org/[email protected]/server.go:1287 +0xd97
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc0001b8420, 0xc0001c0180, 0xde4100, 0xc0002ce510, 0xc0002ccc00)
        /go/pkg/mod/google.golang.org/[email protected]/server.go:722 +0xbb
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/go/pkg/mod/google.golang.org/[email protected]/server.go:720 +0xa1

Docker pull fails when user namespace mode enabled

I am running Docker on Ubuntu with user namespace mode (userns) enabled and get the following error when pulling from snowzach/doods:latest or snowzach/doods:noavx:

failed to register layer: Error processing tar file(exit status 1): Container ID 345018 cannot be mapped to a host ID

This appears to be an issue with one of the image layers containing a really high UID/GID. I have a pretty standard namespace setting with a UID/GID range of 0-65535. When doing the initial pull, this is the third-from-last layer.
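
For reference, the subordinate ID ranges that userns-remap uses live in /etc/subuid and /etc/subgid, and a count of 65536 only covers in-image IDs 0-65535, so a layer that ships files owned by ID 345018 cannot be mapped. A sketch of what checking the range could look like (the dockremap user name is an assumption, and the Docker daemon needs a restart after any change):

cat /etc/subuid
dockremap:100000:65536

Raising the third field to something larger than the offending ID, e.g. dockremap:100000:393216 in both /etc/subuid and /etc/subgid, is a commonly suggested workaround for images that contain very high UIDs/GIDs.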

Crashed while processing image

I'm not entirely sure what I did, but I'm trying to move my config from an old computer to a Raspberry Pi 4 with 8 GB of RAM, running the latest Docker image.

2021-01-13 04:55:02.280993: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 6220800 exceeds 10% of free system memory.
2021-01-13 04:55:02.289249: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 6220800 exceeds 10% of free system memory.
2021-01-13 04:55:02.346752: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 6220800 exceeds 10% of free system memory.
2021-01-13 04:55:02.755312: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 196709 exceeds 10% of free system memory.
2021-01-13 04:55:02.764772: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 24883200 exceeds 10% of free system memory.
terminate called recursively
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
SIGABRT: abort
PC=0xafb0c746 m=8 sigcode=4294967290

goroutine 0 [idle]:
runtime: unknown pc 0xafb0c746
stack: frame={sp:0x9bdfa300, fp:0x0} stack=[0x9b5ff08c,0x9bdfec8c)
9bdfa280:  9bdfa2f0  afbb72d5  00000009  b6fa1060 
9bdfa290:  b6fa0968  00000001  b1d17000  b6f9e7b0 
9bdfa2a0:  b1d17034  afc00008  00000041  00000000 
9bdfa2b0:  00000000  00000000  00000003  9bdff7a8 
9bdfa2c0:  00000008  1971fc00  9bdfa328  00000000 
9bdfa2d0:  9bdfa314  b6fa0968  9bdfa328  9bdfa524 
9bdfa2e0:  00000199  00000000  9bdfae40  afbb7f7f 
9bdfa2f0:  5316d248  afc17000  b205a3a9  9bdfa328 
9bdfa300: <00000006  afb1a0af  00000000  00000000 
9bdfa310:  00000000  00009809  5316d248  00000008 
9bdfa320:  9bdfa558  afc057d1  ffffffff  5316d248 
9bdfa330:  00000055  00000043  456634f0  456634e8 
9bdfa340:  0010e000 <internal/poll.(*FD).RawControl+236>  38e17008  4ea84e08  9bdfa7e8 
9bdfa350:  00000199  00000000  9bdfae40  9bdfa568 
9bdfa360:  9bdfa590  b205a3ab  b205a3ab  b6fa0968 
9bdfa370:  6ee06048  0000002c  0000002c  00000001 
runtime: unknown pc 0xafb0c746
stack: frame={sp:0x9bdfa300, fp:0x0} stack=[0x9b5ff08c,0x9bdfec8c)
9bdfa280:  9bdfa2f0  afbb72d5  00000009  b6fa1060 
9bdfa290:  b6fa0968  00000001  b1d17000  b6f9e7b0 
9bdfa2a0:  b1d17034  afc00008  00000041  00000000 
9bdfa2b0:  00000000  00000000  00000003  9bdff7a8 
9bdfa2c0:  00000008  1971fc00  9bdfa328  00000000 
9bdfa2d0:  9bdfa314  b6fa0968  9bdfa328  9bdfa524 
9bdfa2e0:  00000199  00000000  9bdfae40  afbb7f7f 
9bdfa2f0:  5316d248  afc17000  b205a3a9  9bdfa328 
9bdfa300: <00000006  afb1a0af  00000000  00000000 
9bdfa310:  00000000  00009809  5316d248  00000008 
9bdfa320:  9bdfa558  afc057d1  ffffffff  5316d248 
9bdfa330:  00000055  00000043  456634f0  456634e8 
9bdfa340:  0010e000 <internal/poll.(*FD).RawControl+236>  38e17008  4ea84e08  9bdfa7e8 
9bdfa350:  00000199  00000000  9bdfae40  9bdfa568 
9bdfa360:  9bdfa590  b205a3ab  b205a3ab  b6fa0968 
9bdfa370:  6ee06048  0000002c  0000002c  00000001 

goroutine 130 [syscall]:
runtime.cgocall(0x6e6b8c, 0x38468b0, 0x3810408)
	/usr/local/go/src/runtime/cgocall.go:133 +0x5c fp=0x3846898 sp=0x3846880 pc=0x43270
github.com/tensorflow/tensorflow/tensorflow/go._Cfunc_TF_SessionRun(0x8ed24cb8, 0x0, 0x38103f8, 0x3810400, 0x1, 0x7470540, 0x390e5e0, 0x4, 0x0, 0x0, ...)
	_cgo_gotypes.go:1452 +0x30 fp=0x38468ac sp=0x3846898 pc=0x67663c
github.com/tensorflow/tensorflow/tensorflow/go.(*Session).Run.func1(0x3902700, 0x397ab00, 0x3846ab4, 0x3846aec, 0x4, 0x4, 0x0, 0x0, 0x0, 0x3810408)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/session.go:149 +0x210 fp=0x3846900 sp=0x38468ac pc=0x685158
github.com/tensorflow/tensorflow/tensorflow/go.(*Session).Run(0x3902700, 0x3846ab4, 0x3846aec, 0x4, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/session.go:149 +0x14c fp=0x3846948 sp=0x3846900 pc=0x67d0e8
github.com/snowzach/doods/detector/tensorflow.(*detector).Detect(0x388f3c0, 0x98dd98, 0x38938a0, 0x3860900, 0x0, 0x0, 0x0)
	/build/detector/tensorflow/tensorflow.go:187 +0xb18 fp=0x3846c88 sp=0x3846948 pc=0x689cbc
github.com/snowzach/doods/detector.(*Mux).Detect(0x3894510, 0x98dd98, 0x38938a0, 0x3860900, 0x3894510, 0x974c01, 0x9ddcc430)
	/build/detector/detector.go:130 +0xb4 fp=0x3846cc0 sp=0x3846c88 pc=0x69c090
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler.func1(0x98dd98, 0x38938a0, 0x83cd40, 0x3860900, 0x13, 0x98dd98, 0x38938a0, 0x0)
	/build/odrpc/rpc.pb.go:1050 +0x68 fp=0x3846ce0 sp=0x3846cc0 pc=0x652b60
github.com/grpc-ecosystem/go-grpc-middleware/auth.UnaryServerInterceptor.func1(0x98dd98, 0x38938a0, 0x83cd40, 0x3860900, 0x3894580, 0x3894590, 0x7de558, 0x38945a0, 0x0, 0x3894580)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/auth/auth.go:47 +0xd4 fp=0x3846d0c sp=0x3846ce0 pc=0x6d7758
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1(0x98dd98, 0x38938a0, 0x83cd40, 0x3860900, 0x3890810, 0x757e000, 0x383cb40, 0x647248)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x4c fp=0x3846d38 sp=0x3846d0c pc=0x6d6c80
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1(0x98dd98, 0x38938a0, 0x83cd40, 0x3860900, 0x3894580, 0x3894590, 0x612d04, 0x80caa8, 0x38938a0, 0x807b90)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34 +0xbc fp=0x3846d6c sp=0x3846d38 pc=0x6d6e10
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler(0x808a90, 0x3894510, 0x98dd98, 0x38938a0, 0x3890810, 0x380c2c0, 0x98dd98, 0x38938a0, 0x757e000, 0x128c6)
	/build/odrpc/rpc.pb.go:1052 +0x108 fp=0x3846da0 sp=0x3846d6c pc=0x6472ac
google.golang.org/grpc.(*Server).processUnaryRPC(0x392e000, 0x990b30, 0x39f2a50, 0x38a0a20, 0x380c5e0, 0xe50834, 0x0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x468 fp=0x3846f1c sp=0x3846da0 pc=0x612d4c
google.golang.org/grpc.(*Server).handleStream(0x392e000, 0x990b30, 0x39f2a50, 0x38a0a20, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xa88 fp=0x3846fb0 sp=0x3846f1c pc=0x61631c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x39facd4, 0x392e000, 0x990b30, 0x39f2a50, 0x38a0a20)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:871 +0x84 fp=0x3846fd4 sp=0x3846fb0 pc=0x621424
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm.s:857 +0x4 fp=0x3846fd4 sp=0x3846fd4 pc=0xabcd8
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/go/pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1b8

goroutine 1 [chan receive]:
github.com/snowzach/doods/cmd.glob..func1(0xe55cd8, 0x3894110, 0x0, 0x2)
	/build/cmd/api.go:46 +0x1ec
github.com/spf13/cobra.(*Command).execute(0xe55cd8, 0x3894100, 0x2, 0x2, 0xe55cd8, 0x3894100)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:854 +0x1e8
github.com/spf13/cobra.(*Command).ExecuteC(0xe55e28, 0x1, 0x458d0, 0x385e070)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:958 +0x26c
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
github.com/snowzach/doods/cmd.Execute()
	/build/cmd/root.go:57 +0x20
main.main()
	/build/main.go:8 +0x14

goroutine 19 [syscall]:
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:147 +0x130
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:23 +0x14
created by os/signal.Notify.func1
	/usr/local/go/src/os/signal/signal.go:127 +0x34

goroutine 21 [chan receive]:
github.com/snowzach/doods/conf.init.1.func1()
	/build/conf/signal.go:42 +0x28
created by github.com/snowzach/doods/conf.init.1
	/build/conf/signal.go:40 +0xb8

goroutine 50 [select]:
google.golang.org/grpc.(*ccBalancerWrapper).watcher(0x386a540)
	/go/pkg/mod/google.golang.org/[email protected]/balancer_conn_wrappers.go:69 +0x90
created by google.golang.org/grpc.newCCBalancerWrapper
	/go/pkg/mod/google.golang.org/[email protected]/balancer_conn_wrappers.go:60 +0x108

goroutine 51 [chan receive]:
google.golang.org/grpc.(*addrConn).resetTransport(0x3928180)
	/go/pkg/mod/google.golang.org/[email protected]/clientconn.go:1188 +0x464
created by google.golang.org/grpc.(*addrConn).connect
	/go/pkg/mod/google.golang.org/[email protected]/clientconn.go:825 +0xa8

goroutine 52 [chan receive (nil chan)]:
github.com/snowzach/doods/server/rpc.RegisterVersionRPCHandlerFromEndpoint.func1.1(0x98dd58, 0x3918020, 0x383cd20, 0x382a3e0, 0x9)
	/build/server/rpc/version.pb.gw.go:56 +0x38
created by github.com/snowzach/doods/server/rpc.RegisterVersionRPCHandlerFromEndpoint.func1
	/build/server/rpc/version.pb.gw.go:55 +0x164

goroutine 53 [select]:
google.golang.org/grpc.(*ccBalancerWrapper).watcher(0x386a5d0)
	/go/pkg/mod/google.golang.org/[email protected]/balancer_conn_wrappers.go:69 +0x90
created by google.golang.org/grpc.newCCBalancerWrapper
	/go/pkg/mod/google.golang.org/[email protected]/balancer_conn_wrappers.go:60 +0x108

goroutine 54 [chan receive]:
google.golang.org/grpc.(*addrConn).resetTransport(0x3928480)
	/go/pkg/mod/google.golang.org/[email protected]/clientconn.go:1188 +0x464
created by google.golang.org/grpc.(*addrConn).connect
	/go/pkg/mod/google.golang.org/[email protected]/clientconn.go:825 +0xa8

goroutine 55 [chan receive (nil chan)]:
github.com/snowzach/doods/odrpc.RegisterOdrpcHandlerFromEndpoint.func1.1(0x98dd58, 0x3918020, 0x383cf00, 0x382a440, 0x9)
	/build/odrpc/rpc.pb.gw.go:237 +0x38
created by github.com/snowzach/doods/odrpc.RegisterOdrpcHandlerFromEndpoint.func1
	/build/odrpc/rpc.pb.gw.go:236 +0x164

goroutine 56 [IO wait]:
internal/poll.runtime_pollWait(0x9ddc7f48, 0x72, 0x0)
	/usr/local/go/src/runtime/netpoll.go:203 +0x44
internal/poll.(*pollDesc).wait(0x393c424, 0x72, 0x0, 0x0, 0x858244)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0x393c410, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:384 +0x1b0
net.(*netFD).accept(0x393c410, 0xeb35b8, 0x0, 0x1)
	/usr/local/go/src/net/fd_unix.go:238 +0x20
net.(*TCPListener).accept(0x380e320, 0x392aca0, 0xfe7d2800, 0x81874)
	/usr/local/go/src/net/tcpsock_posix.go:139 +0x20
net.(*TCPListener).Accept(0x380e320, 0x383af04, 0xc, 0x38015e0, 0x34bd2c)
	/usr/local/go/src/net/tcpsock.go:261 +0x54
net/http.(*Server).Serve(0x39122d0, 0x98c518, 0x380e320, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:2901 +0x1fc
github.com/snowzach/doods/server.(*Server).ListenAndServe.func3(0x380c4e0, 0x98c518, 0x380e320, 0x3900a70)
	/build/server/server.go:280 +0x30
created by github.com/snowzach/doods/server.(*Server).ListenAndServe
	/build/server/server.go:279 +0x5bc

goroutine 43 [IO wait]:
internal/poll.runtime_pollWait(0x9ddc7d8c, 0x72, 0xffffffff)
	/usr/local/go/src/runtime/netpoll.go:203 +0x44
internal/poll.(*pollDesc).wait(0x38f2334, 0x72, 0x8000, 0x8000, 0xffffffff)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0x38f2320, 0x399c000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:169 +0x174
net.(*netFD).Read(0x38f2320, 0x399c000, 0x8000, 0x8000, 0x0, 0x5a9d5c, 0x39fa0e0)
	/usr/local/go/src/net/fd_unix.go:202 +0x38
net.(*conn).Read(0x3896108, 0x399c000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:184 +0x64
bufio.(*Reader).Read(0x3890240, 0x38f4120, 0x9, 0x9, 0xed001, 0x3882700, 0x46314)
	/usr/local/go/src/bufio/bufio.go:226 +0x248
io.ReadAtLeast(0x984530, 0x3890240, 0x38f4120, 0x9, 0x9, 0x9, 0x41218, 0xc6148, 0x38cc8f0)
	/usr/local/go/src/io/io.go:310 +0x6c
io.ReadFull(...)
	/usr/local/go/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0x38f4120, 0x9, 0x9, 0x984530, 0x3890240, 0x0, 0x0, 0x0, 0x38902c4, 0x0)
	/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x54
golang.org/x/net/http2.(*Framer).ReadFrame(0x38f4100, 0x39fa0e0, 0x39fa0e0, 0x0, 0x0)
	/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:492 +0x74
google.golang.org/grpc/internal/transport.(*http2Client).reader(0x38fe120)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:1294 +0x10c
created by google.golang.org/grpc/internal/transport.newHTTP2Client
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:310 +0xbcc

goroutine 39 [IO wait]:
internal/poll.runtime_pollWait(0x9ddc7eb4, 0x72, 0xffffffff)
	/usr/local/go/src/runtime/netpoll.go:203 +0x44
internal/poll.(*pollDesc).wait(0x38f21a4, 0x72, 0x8000, 0x8000, 0xffffffff)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0x38f2190, 0x38f6000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:169 +0x174
net.(*netFD).Read(0x38f2190, 0x38f6000, 0x8000, 0x8000, 0x0, 0x5a9d5c, 0x39193e0)
	/usr/local/go/src/net/fd_unix.go:202 +0x38
net.(*conn).Read(0x38960c8, 0x38f6000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:184 +0x64
bufio.(*Reader).Read(0x3890150, 0x38f40a0, 0x9, 0x9, 0xed000, 0x3a0e000, 0xc)
	/usr/local/go/src/bufio/bufio.go:226 +0x248
io.ReadAtLeast(0x984530, 0x3890150, 0x38f40a0, 0x9, 0x9, 0x9, 0x41218, 0xc6148, 0x38901b8)
	/usr/local/go/src/io/io.go:310 +0x6c
io.ReadFull(...)
	/usr/local/go/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0x38f40a0, 0x9, 0x9, 0x984530, 0x3890150, 0x0, 0x0, 0x0, 0x38901d4, 0x0)
	/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x54
golang.org/x/net/http2.(*Framer).ReadFrame(0x38f4080, 0x39193e0, 0x39193e0, 0x0, 0x0)
	/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:492 +0x74
google.golang.org/grpc/internal/transport.(*http2Client).reader(0x38fe000)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:1294 +0x10c
created by google.golang.org/grpc/internal/transport.newHTTP2Client
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:310 +0xbcc

goroutine 40 [select]:
google.golang.org/grpc/internal/transport.(*controlBuffer).get(0x38901b0, 0x1, 0x0, 0x0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:395 +0xc8
google.golang.org/grpc/internal/transport.(*loopyWriter).run(0x38901e0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:515 +0x1a8
google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0x38fe000)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:356 +0x5c
created by google.golang.org/grpc/internal/transport.newHTTP2Client
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:354 +0xdc0

goroutine 25 [select]:
golang.org/x/net/http2.(*serverConn).serve(0x3a16380)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:859 +0x460
golang.org/x/net/http2.(*Server).ServeConn(0x386a510, 0x9906e8, 0x3a0a220, 0x38ebd44)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:472 +0x5d0
golang.org/x/net/http2/h2c.h2cHandler.ServeHTTP(0x986750, 0x380e210, 0x386a510, 0x98c678, 0x38a0000, 0x39fe000)
	/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:87 +0x1c0
net/http.serverHandler.ServeHTTP(0x39122d0, 0x98c678, 0x38a0000, 0x39fe000)
	/usr/local/go/src/net/http/server.go:2807 +0x88
net/http.(*conn).serve(0x39f6000, 0x98dd38, 0x3a0a0a0)
	/usr/local/go/src/net/http/server.go:1895 +0x7d4
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2933 +0x2d0

goroutine 31 [select]:
golang.org/x/net/http2.(*serverConn).serve(0x3a16620)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:859 +0x460
golang.org/x/net/http2.(*Server).ServeConn(0x386a510, 0x9906e8, 0x3a0a5a0, 0x38ecd44)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:472 +0x5d0
golang.org/x/net/http2/h2c.h2cHandler.ServeHTTP(0x986750, 0x380e210, 0x386a510, 0x98c678, 0x38a01b0, 0x39fe180)
	/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:87 +0x1c0
net/http.serverHandler.ServeHTTP(0x39122d0, 0x98c678, 0x38a01b0, 0x39fe180)
	/usr/local/go/src/net/http/server.go:2807 +0x88
net/http.(*conn).serve(0x39f60c0, 0x98dd38, 0x3a0a460)
	/usr/local/go/src/net/http/server.go:1895 +0x7d4
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2933 +0x2d0

goroutine 29 [IO wait]:
internal/poll.runtime_pollWait(0x9ddc7e20, 0x72, 0xffffffff)
	/usr/local/go/src/runtime/netpoll.go:203 +0x44
internal/poll.(*pollDesc).wait(0x39f2014, 0x72, 0x1000, 0x1000, 0xffffffff)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0x39f2000, 0x39fc000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:169 +0x174
net.(*netFD).Read(0x39f2000, 0x39fc000, 0x1000, 0x1000, 0x39fc001, 0x0, 0x342e5c)
	/usr/local/go/src/net/fd_unix.go:202 +0x38
net.(*conn).Read(0x39f4000, 0x39fc000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:184 +0x64
net/http.(*connReader).Read(0x3a0a0c0, 0x39fc000, 0x1000, 0x1000, 0x10001, 0x19b9, 0xd)
	/usr/local/go/src/net/http/server.go:786 +0x160
bufio.(*Reader).Read(0x39f8030, 0x39fe120, 0x9, 0x9, 0x0, 0x0, 0x0)
	/usr/local/go/src/bufio/bufio.go:226 +0x248
io.(*multiReader).Read(0x38b4010, 0x39fe120, 0x9, 0x9, 0x46138, 0x79f00, 0x46198)
	/usr/local/go/src/io/multi.go:26 +0xa8
golang.org/x/net/http2/h2c.(*rwConn).Read(0x3a0a220, 0x39fe120, 0x9, 0x9, 0x3a161c0, 0x3, 0x39f0130)
	/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:287 +0x40
io.ReadAtLeast(0x9ddcc3d0, 0x3a0a220, 0x39fe120, 0x9, 0x9, 0x9, 0x7470001, 0x3a40002, 0x0)
	/usr/local/go/src/io/io.go:310 +0x6c
io.ReadFull(...)
	/usr/local/go/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0x39fe120, 0x9, 0x9, 0x9ddcc3d0, 0x3a0a220, 0x0, 0x0, 0x0, 0x39bfc6c, 0x3a43f5c)
	/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x54
golang.org/x/net/http2.(*Framer).ReadFrame(0x39fe100, 0x3a43f58, 0x2, 0x0, 0x1)
	/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:492 +0x74
golang.org/x/net/http2.(*serverConn).readFrames(0x3a16380)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:745 +0x74
created by golang.org/x/net/http2.(*serverConn).serve
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:851 +0x2cc

goroutine 44 [select]:
google.golang.org/grpc/internal/transport.(*controlBuffer).get(0x38902a0, 0x1, 0x0, 0x0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:395 +0xc8
google.golang.org/grpc/internal/transport.(*loopyWriter).run(0x38902d0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:515 +0x1a8
google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0x38fe120)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:356 +0x5c
created by google.golang.org/grpc/internal/transport.newHTTP2Client
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:354 +0xdc0

goroutine 96 [select]:
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams.func1(0x39f0e80, 0x39f2a50, 0x39f4588)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:344 +0xa4
created by google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:343 +0xac

goroutine 67 [IO wait]:
internal/poll.runtime_pollWait(0x9ddc7cf8, 0x72, 0xffffffff)
	/usr/local/go/src/runtime/netpoll.go:203 +0x44
internal/poll.(*pollDesc).wait(0x39f2104, 0x72, 0x1000, 0x1000, 0xffffffff)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0x39f20f0, 0x38c6000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:169 +0x174
net.(*netFD).Read(0x39f20f0, 0x38c6000, 0x1000, 0x1000, 0x1, 0x0, 0x342e5c)
	/usr/local/go/src/net/fd_unix.go:202 +0x38
net.(*conn).Read(0x39f4050, 0x38c6000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:184 +0x64
net/http.(*connReader).Read(0x3a0a480, 0x38c6000, 0x1000, 0x1000, 0x10401, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:786 +0x160
bufio.(*Reader).Read(0x39f8090, 0x39fe2a0, 0x9, 0x9, 0x0, 0x0, 0x0)
	/usr/local/go/src/bufio/bufio.go:226 +0x248
io.(*multiReader).Read(0x38b4060, 0x39fe2a0, 0x9, 0x9, 0x39f03f0, 0x1, 0x0)
	/usr/local/go/src/io/multi.go:26 +0xa8
golang.org/x/net/http2/h2c.(*rwConn).Read(0x3a0a5a0, 0x39fe2a0, 0x9, 0x9, 0x1, 0x8b2b4, 0x39f03f0)
	/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:287 +0x40
io.ReadAtLeast(0x9ddcc3d0, 0x3a0a5a0, 0x39fe2a0, 0x9, 0x9, 0x9, 0x3890001, 0x5a0001, 0x0)
	/usr/local/go/src/io/io.go:310 +0x6c
io.ReadFull(...)
	/usr/local/go/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0x39fe2a0, 0x9, 0x9, 0x9ddcc3d0, 0x3a0a5a0, 0x0, 0x0, 0x0, 0x38603ac, 0x397775c)
	/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x54
golang.org/x/net/http2.(*Framer).ReadFrame(0x39fe280, 0x3977758, 0x2, 0x0, 0x1)
	/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:492 +0x74
golang.org/x/net/http2.(*serverConn).readFrames(0x3a16620)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:745 +0x74
created by golang.org/x/net/http2.(*serverConn).serve
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:851 +0x2cc

goroutine 45 [select]:
google.golang.org/grpc/internal/transport.(*Stream).waitOnHeader(0x3912c60)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/transport.go:321 +0x94
google.golang.org/grpc/internal/transport.(*Stream).RecvCompress(...)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/transport.go:336
google.golang.org/grpc.(*csAttempt).recvMsg(0x393cc80, 0x838da0, 0x380d400, 0x0, 0xffffffff, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:894 +0x540
google.golang.org/grpc.(*clientStream).RecvMsg.func1(0x393cc80, 0x1575b, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:759 +0x34
google.golang.org/grpc.(*clientStream).withRetry(0x3a50140, 0x3a5d448, 0x3a5d430, 0x0, 0x3a4cf80)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:617 +0x60
google.golang.org/grpc.(*clientStream).RecvMsg(0x3a50140, 0x838da0, 0x380d400, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:758 +0xcc
google.golang.org/grpc.invoke(0x98dd98, 0x380d3a0, 0x861371, 0x13, 0x83cd40, 0x391aac0, 0x838da0, 0x380d400, 0x383cf00, 0x380e630, ...)
	/go/pkg/mod/google.golang.org/[email protected]/call.go:73 +0xdc
google.golang.org/grpc.(*ClientConn).Invoke(0x383cf00, 0x98dd98, 0x380d3a0, 0x861371, 0x13, 0x83cd40, 0x391aac0, 0x838da0, 0x380d400, 0x380e630, ...)
	/go/pkg/mod/google.golang.org/[email protected]/call.go:37 +0x108
github.com/snowzach/doods/odrpc.(*odrpcClient).Detect(0x39010d0, 0x98dd98, 0x380d3a0, 0x391aac0, 0x380e630, 0x2, 0x2, 0x98dd98, 0x380d3a0, 0x0)
	/build/odrpc/rpc.pb.go:967 +0x88
github.com/snowzach/doods/odrpc.request_Odrpc_Detect_0(0x98dd98, 0x380d3a0, 0x98e870, 0xeb1f48, 0x98bc58, 0x39010d0, 0x392ca00, 0x380d2e0, 0x0, 0x74482d58, ...)
	/build/odrpc/rpc.pb.gw.go:62 +0x29c
github.com/snowzach/doods/odrpc.RegisterOdrpcHandlerClient.func2(0x9d32e038, 0x380d2c0, 0x392ca00, 0x380d2e0)
	/build/odrpc/rpc.pb.gw.go:289 +0x134
github.com/grpc-ecosystem/grpc-gateway/runtime.(*ServeMux).ServeHTTP(0x391a340, 0x9d32e038, 0x380d2c0, 0x392ca00)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/runtime/mux.go:240 +0x8d0
github.com/snowzach/doods/server.(*Server).ListenAndServe.func2(0x9d32e038, 0x380d2c0, 0x392ca00)
	/build/server/server.go:268 +0x34
net/http.HandlerFunc.ServeHTTP(...)
	/usr/local/go/src/net/http/server.go:2012
github.com/go-chi/chi.(*Mux).routeHTTP(0x386a030, 0x9d32e038, 0x380d2c0, 0x392ca00)
	/go/pkg/mod/github.com/go-chi/[email protected]/mux.go:437 +0x1f4
net/http.HandlerFunc.ServeHTTP(0x3900928, 0x9d32e038, 0x380d2c0, 0x392ca00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/cors.(*Cors).Handler.func1(0x9d32e038, 0x380d2c0, 0x392ca00)
	/go/pkg/mod/github.com/go-chi/[email protected]/cors.go:228 +0x17c
net/http.HandlerFunc.ServeHTTP(0x380e220, 0x9d32e038, 0x380d2c0, 0x392ca00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/snowzach/doods/server.New.func2.1(0x98c678, 0x3912ab0, 0x392ca00)
	/build/server/server.go:114 +0x1a8
net/http.HandlerFunc.ServeHTTP(0x380e230, 0x98c678, 0x3912ab0, 0x392ca00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/render.SetContentType.func1.1(0x98c678, 0x3912ab0, 0x392c980)
	/go/pkg/mod/github.com/go-chi/[email protected]/content_type.go:52 +0x138
net/http.HandlerFunc.ServeHTTP(0x380e240, 0x98c678, 0x3912ab0, 0x392c980)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi/middleware.Recoverer.func1(0x98c678, 0x3912ab0, 0x392c980)
	/go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:37 +0x6c
net/http.HandlerFunc.ServeHTTP(0x380e250, 0x98c678, 0x3912ab0, 0x392c980)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi/middleware.RequestID.func1(0x98c678, 0x3912ab0, 0x392c900)
	/go/pkg/mod/github.com/go-chi/[email protected]/middleware/request_id.go:76 +0x168
net/http.HandlerFunc.ServeHTTP(0x380e260, 0x98c678, 0x3912ab0, 0x392c900)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi.(*Mux).ServeHTTP(0x386a030, 0x98c678, 0x3912ab0, 0x39fe900)
	/go/pkg/mod/github.com/go-chi/[email protected]/mux.go:86 +0x220
github.com/snowzach/doods/server.New.func3(0x98c678, 0x3912ab0, 0x39fe900)
	/build/server/server.go:185 +0x7c
net/http.HandlerFunc.ServeHTTP(0x380e210, 0x98c678, 0x3912ab0, 0x39fe900)
	/usr/local/go/src/net/http/server.go:2012 +0x34
golang.org/x/net/http2/h2c.h2cHandler.ServeHTTP(0x986750, 0x380e210, 0x386a510, 0x98c678, 0x3912ab0, 0x39fe900)
	/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:98 +0x274
net/http.serverHandler.ServeHTTP(0x39122d0, 0x98c678, 0x3912ab0, 0x39fe900)
	/usr/local/go/src/net/http/server.go:2807 +0x88
net/http.(*conn).serve(0x38a42a0, 0x98dd38, 0x3a0b540)
	/usr/local/go/src/net/http/server.go:1895 +0x7d4
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2933 +0x2d0

goroutine 46 [select]:
google.golang.org/grpc/internal/transport.(*Stream).waitOnHeader(0x38a0990)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/transport.go:321 +0x94
google.golang.org/grpc/internal/transport.(*Stream).RecvCompress(...)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/transport.go:336
google.golang.org/grpc.(*csAttempt).recvMsg(0x39f2910, 0x838da0, 0x3a0b940, 0x0, 0xffffffff, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:894 +0x540
google.golang.org/grpc.(*clientStream).RecvMsg.func1(0x39f2910, 0x18080, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:759 +0x34
google.golang.org/grpc.(*clientStream).withRetry(0x38d2280, 0x3a59448, 0x3a59430, 0x0, 0x39f4508)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:617 +0x60
google.golang.org/grpc.(*clientStream).RecvMsg(0x38d2280, 0x838da0, 0x3a0b940, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:758 +0xcc
google.golang.org/grpc.invoke(0x98dd98, 0x3a0b7c0, 0x861371, 0x13, 0x83cd40, 0x39bf100, 0x838da0, 0x3a0b940, 0x383cf00, 0x38b45a0, ...)
	/go/pkg/mod/google.golang.org/[email protected]/call.go:73 +0xdc
google.golang.org/grpc.(*ClientConn).Invoke(0x383cf00, 0x98dd98, 0x3a0b7c0, 0x861371, 0x13, 0x83cd40, 0x39bf100, 0x838da0, 0x3a0b940, 0x38b45a0, ...)
	/go/pkg/mod/google.golang.org/[email protected]/call.go:37 +0x108
github.com/snowzach/doods/odrpc.(*odrpcClient).Detect(0x39010d0, 0x98dd98, 0x3a0b7c0, 0x39bf100, 0x38b45a0, 0x2, 0x2, 0x98dd98, 0x3a0b7c0, 0x0)
	/build/odrpc/rpc.pb.go:967 +0x88
github.com/snowzach/doods/odrpc.request_Odrpc_Detect_0(0x98dd98, 0x3a0b7c0, 0x98e870, 0xeb1f48, 0x98bc58, 0x39010d0, 0x39fea80, 0x3a0b700, 0x0, 0x74482d58, ...)
	/build/odrpc/rpc.pb.gw.go:62 +0x29c
github.com/snowzach/doods/odrpc.RegisterOdrpcHandlerClient.func2(0x9d32e038, 0x3a0b6e0, 0x39fea80, 0x3a0b700)
	/build/odrpc/rpc.pb.gw.go:289 +0x134
github.com/grpc-ecosystem/grpc-gateway/runtime.(*ServeMux).ServeHTTP(0x391a340, 0x9d32e038, 0x3a0b6e0, 0x39fea80)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/runtime/mux.go:240 +0x8d0
github.com/snowzach/doods/server.(*Server).ListenAndServe.func2(0x9d32e038, 0x3a0b6e0, 0x39fea80)
	/build/server/server.go:268 +0x34
net/http.HandlerFunc.ServeHTTP(...)
	/usr/local/go/src/net/http/server.go:2012
github.com/go-chi/chi.(*Mux).routeHTTP(0x386a030, 0x9d32e038, 0x3a0b6e0, 0x39fea80)
	/go/pkg/mod/github.com/go-chi/[email protected]/mux.go:437 +0x1f4
net/http.HandlerFunc.ServeHTTP(0x3900928, 0x9d32e038, 0x3a0b6e0, 0x39fea80)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/cors.(*Cors).Handler.func1(0x9d32e038, 0x3a0b6e0, 0x39fea80)
	/go/pkg/mod/github.com/go-chi/[email protected]/cors.go:228 +0x17c
net/http.HandlerFunc.ServeHTTP(0x380e220, 0x9d32e038, 0x3a0b6e0, 0x39fea80)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/snowzach/doods/server.New.func2.1(0x98c678, 0x38a0750, 0x39fea80)
	/build/server/server.go:114 +0x1a8
net/http.HandlerFunc.ServeHTTP(0x380e230, 0x98c678, 0x38a0750, 0x39fea80)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/render.SetContentType.func1.1(0x98c678, 0x38a0750, 0x39fea00)
	/go/pkg/mod/github.com/go-chi/[email protected]/content_type.go:52 +0x138
net/http.HandlerFunc.ServeHTTP(0x380e240, 0x98c678, 0x38a0750, 0x39fea00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi/middleware.Recoverer.func1(0x98c678, 0x38a0750, 0x39fea00)
	/go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:37 +0x6c
net/http.HandlerFunc.ServeHTTP(0x380e250, 0x98c678, 0x38a0750, 0x39fea00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi/middleware.RequestID.func1(0x98c678, 0x38a0750, 0x39fe980)
	/go/pkg/mod/github.com/go-chi/[email protected]/middleware/request_id.go:76 +0x168
net/http.HandlerFunc.ServeHTTP(0x380e260, 0x98c678, 0x38a0750, 0x39fe980)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi.(*Mux).ServeHTTP(0x386a030, 0x98c678, 0x38a0750, 0x38f4300)
	/go/pkg/mod/github.com/go-chi/[email protected]/mux.go:86 +0x220
github.com/snowzach/doods/server.New.func3(0x98c678, 0x38a0750, 0x38f4300)
	/build/server/server.go:185 +0x7c
net/http.HandlerFunc.ServeHTTP(0x380e210, 0x98c678, 0x38a0750, 0x38f4300)
	/usr/local/go/src/net/http/server.go:2012 +0x34
golang.org/x/net/http2/h2c.h2cHandler.ServeHTTP(0x986750, 0x380e210, 0x386a510, 0x98c678, 0x38a0750, 0x38f4300)
	/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:98 +0x274
net/http.serverHandler.ServeHTTP(0x39122d0, 0x98c678, 0x38a0750, 0x38f4300)
	/usr/local/go/src/net/http/server.go:2807 +0x88
net/http.(*conn).serve(0x38a4300, 0x98dd38, 0x3893100)
	/usr/local/go/src/net/http/server.go:1895 +0x7d4
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2933 +0x2d0

goroutine 91 [IO wait]:
internal/poll.runtime_pollWait(0x9ddc7bd0, 0x72, 0xffffffff)
	/usr/local/go/src/runtime/netpoll.go:203 +0x44
internal/poll.(*pollDesc).wait(0x38f2654, 0x72, 0x0, 0x1, 0xffffffff)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0x38f2640, 0x389312d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:169 +0x174
net.(*netFD).Read(0x38f2640, 0x389312d, 0x1, 0x1, 0x39be8c0, 0x0, 0x3a0ab30)
	/usr/local/go/src/net/fd_unix.go:202 +0x38
net.(*conn).Read(0x3896508, 0x389312d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:184 +0x64
net/http.(*connReader).backgroundRead(0x3893120)
	/usr/local/go/src/net/http/server.go:678 +0x44
created by net/http.(*connReader).startBackgroundRead
	/usr/local/go/src/net/http/server.go:674 +0xc0

goroutine 133 [select]:
google.golang.org/grpc/internal/transport.(*Stream).waitOnHeader(0x38a0bd0)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/transport.go:321 +0x94
google.golang.org/grpc/internal/transport.(*Stream).RecvCompress(...)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/transport.go:336
google.golang.org/grpc.(*csAttempt).recvMsg(0x39f2c30, 0x838da0, 0x727e2c0, 0x0, 0xffffffff, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:894 +0x540
google.golang.org/grpc.(*clientStream).RecvMsg.func1(0x39f2c30, 0x30074, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:759 +0x34
google.golang.org/grpc.(*clientStream).withRetry(0x38d23c0, 0x3a5b448, 0x3a5b430, 0x0, 0x39f4648)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:617 +0x60
google.golang.org/grpc.(*clientStream).RecvMsg(0x38d23c0, 0x838da0, 0x727e2c0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:758 +0xcc
google.golang.org/grpc.invoke(0x98dd98, 0x727e260, 0x861371, 0x13, 0x83cd40, 0x39bf8c0, 0x838da0, 0x727e2c0, 0x383cf00, 0x38b48e0, ...)
	/go/pkg/mod/google.golang.org/[email protected]/call.go:73 +0xdc
google.golang.org/grpc.(*ClientConn).Invoke(0x383cf00, 0x98dd98, 0x727e260, 0x861371, 0x13, 0x83cd40, 0x39bf8c0, 0x838da0, 0x727e2c0, 0x38b48e0, ...)
	/go/pkg/mod/google.golang.org/[email protected]/call.go:37 +0x108
github.com/snowzach/doods/odrpc.(*odrpcClient).Detect(0x39010d0, 0x98dd98, 0x727e260, 0x39bf8c0, 0x38b48e0, 0x2, 0x2, 0x98dd98, 0x727e260, 0x0)
	/build/odrpc/rpc.pb.go:967 +0x88
github.com/snowzach/doods/odrpc.request_Odrpc_Detect_0(0x98dd98, 0x727e260, 0x98e870, 0xeb1f48, 0x98bc58, 0x39010d0, 0x39fee80, 0x727e1a0, 0x0, 0x74482d58, ...)
	/build/odrpc/rpc.pb.gw.go:62 +0x29c
github.com/snowzach/doods/odrpc.RegisterOdrpcHandlerClient.func2(0x9d32e038, 0x727e180, 0x39fee80, 0x727e1a0)
	/build/odrpc/rpc.pb.gw.go:289 +0x134
github.com/grpc-ecosystem/grpc-gateway/runtime.(*ServeMux).ServeHTTP(0x391a340, 0x9d32e038, 0x727e180, 0x39fee80)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/runtime/mux.go:240 +0x8d0
github.com/snowzach/doods/server.(*Server).ListenAndServe.func2(0x9d32e038, 0x727e180, 0x39fee80)
	/build/server/server.go:268 +0x34
net/http.HandlerFunc.ServeHTTP(...)
	/usr/local/go/src/net/http/server.go:2012
github.com/go-chi/chi.(*Mux).routeHTTP(0x386a030, 0x9d32e038, 0x727e180, 0x39fee80)
	/go/pkg/mod/github.com/go-chi/[email protected]/mux.go:437 +0x1f4
net/http.HandlerFunc.ServeHTTP(0x3900928, 0x9d32e038, 0x727e180, 0x39fee80)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/cors.(*Cors).Handler.func1(0x9d32e038, 0x727e180, 0x39fee80)
	/go/pkg/mod/github.com/go-chi/[email protected]/cors.go:228 +0x17c
net/http.HandlerFunc.ServeHTTP(0x380e220, 0x9d32e038, 0x727e180, 0x39fee80)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/snowzach/doods/server.New.func2.1(0x98c678, 0x38a0ab0, 0x39fee80)
	/build/server/server.go:114 +0x1a8
net/http.HandlerFunc.ServeHTTP(0x380e230, 0x98c678, 0x38a0ab0, 0x39fee80)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/render.SetContentType.func1.1(0x98c678, 0x38a0ab0, 0x39fee00)
	/go/pkg/mod/github.com/go-chi/[email protected]/content_type.go:52 +0x138
net/http.HandlerFunc.ServeHTTP(0x380e240, 0x98c678, 0x38a0ab0, 0x39fee00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi/middleware.Recoverer.func1(0x98c678, 0x38a0ab0, 0x39fee00)
	/go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:37 +0x6c
net/http.HandlerFunc.ServeHTTP(0x380e250, 0x98c678, 0x38a0ab0, 0x39fee00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi/middleware.RequestID.func1(0x98c678, 0x38a0ab0, 0x39fed80)
	/go/pkg/mod/github.com/go-chi/[email protected]/middleware/request_id.go:76 +0x168
net/http.HandlerFunc.ServeHTTP(0x380e260, 0x98c678, 0x38a0ab0, 0x39fed80)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi.(*Mux).ServeHTTP(0x386a030, 0x98c678, 0x38a0ab0, 0x39fed00)
	/go/pkg/mod/github.com/go-chi/[email protected]/mux.go:86 +0x220
github.com/snowzach/doods/server.New.func3(0x98c678, 0x38a0ab0, 0x39fed00)
	/build/server/server.go:185 +0x7c
net/http.HandlerFunc.ServeHTTP(0x380e210, 0x98c678, 0x38a0ab0, 0x39fed00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
golang.org/x/net/http2/h2c.h2cHandler.ServeHTTP(0x986750, 0x380e210, 0x386a510, 0x98c678, 0x38a0ab0, 0x39fed00)
	/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:98 +0x274
net/http.serverHandler.ServeHTTP(0x39122d0, 0x98c678, 0x38a0ab0, 0x39fed00)
	/usr/local/go/src/net/http/server.go:2807 +0x88
net/http.(*conn).serve(0x39f6360, 0x98dd38, 0x3a0bfe0)
	/usr/local/go/src/net/http/server.go:1895 +0x7d4
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2933 +0x2d0

goroutine 104 [select]:
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams.func1(0x38ccf80, 0x38f2b40, 0x3896950)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:344 +0xa4
created by google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:343 +0xac

goroutine 9 [select]:
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams.func1(0x3920940, 0x38bc410, 0x3810200)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:344 +0xa4
created by google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:343 +0xac

goroutine 58 [IO wait]:
internal/poll.runtime_pollWait(0x9ddc7c64, 0x72, 0xffffffff)
	/usr/local/go/src/runtime/netpoll.go:203 +0x44
internal/poll.(*pollDesc).wait(0x38f2604, 0x72, 0x0, 0x1, 0xffffffff)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0x38f25f0, 0x3a0b56d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:169 +0x174
terminate called recursively
net.(*netFD).Read(0x38f25f0, 0x3a0b56d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/fd_unix.go:202 +0x38
net.(*conn).Read(0x3896500, 0x3a0b56d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:184 +0x64
net/http.(*connReader).backgroundRead(0x3a0b560)
	/usr/local/go/src/net/http/server.go:678 +0x44
created by net/http.(*connReader).startBackgroundRead
	/usr/local/go/src/net/http/server.go:674 +0xc0

goroutine 47 [select]:
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).runStream(0x38bc410)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:426 +0x8c
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams(0x38bc410, 0x399bc40, 0x8933c4)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:416 +0x430
google.golang.org/grpc.(*Server).serveStreams(0x392e000, 0x990b30, 0x38bc410)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:855 +0xe4
google.golang.org/grpc.(*Server).ServeHTTP(0x392e000, 0x98c158, 0x3896568, 0x38f4400)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:924 +0xcc
github.com/snowzach/doods/server.New.func3(0x98c158, 0x3896568, 0x38f4400)
	/build/server/server.go:183 +0x50
net/http.HandlerFunc.ServeHTTP(0x380e210, 0x98c158, 0x3896568, 0x38f4400)
	/usr/local/go/src/net/http/server.go:2012 +0x34
golang.org/x/net/http2.(*serverConn).runHandler(0x3a16380, 0x3896568, 0x38f4400, 0x3894450)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:2152 +0x74
created by golang.org/x/net/http2.(*serverConn).processHeaders
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:1882 +0x418

goroutine 115 [IO wait]:
internal/poll.runtime_pollWait(0x9ddc7b3c, 0x72, 0xffffffff)
	/usr/local/go/src/runtime/netpoll.go:203 +0x44
internal/poll.(*pollDesc).wait(0x38f29c4, 0x72, 0x0, 0x1, 0xffffffff)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0x38f29b0, 0x747012d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:169 +0x174
net.(*netFD).Read(0x38f29b0, 0x747012d, 0x1, 0x1, 0x39209c0, 0x0, 0x9847d0)
	/usr/local/go/src/net/fd_unix.go:202 +0x38
net.(*conn).Read(0x38966f0, 0x747012d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:184 +0x64
net/http.(*connReader).backgroundRead(0x7470120)
	/usr/local/go/src/net/http/server.go:678 +0x44
created by net/http.(*connReader).startBackgroundRead
	/usr/local/go/src/net/http/server.go:674 +0xc0

goroutine 11 [syscall]:
github.com/tensorflow/tensorflow/tensorflow/go._Cfunc_TF_SessionRun(0x8f7f40e0, 0x0, 0x3896848, 0x3896850, 0x1, 0x3893a00, 0x3894610, 0x4, 0x0, 0x0, ...)
	_cgo_gotypes.go:1452 +0x30 fp=0x384b8ac sp=0x384b898 pc=0x67663c
github.com/tensorflow/tensorflow/tensorflow/go.(*Session).Run.func1(0x3902500, 0x3860980, 0x384bab4, 0x384baec, 0x4, 0x4, 0x0, 0x0, 0x0, 0x3896858)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/session.go:149 +0x210 fp=0x384b900 sp=0x384b8ac pc=0x685158
github.com/tensorflow/tensorflow/tensorflow/go.(*Session).Run(0x3902500, 0x384bab4, 0x384baec, 0x4, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/session.go:149 +0x14c fp=0x384b948 sp=0x384b900 pc=0x67d0e8
github.com/snowzach/doods/detector/tensorflow.(*detector).Detect(0x388f3c0, 0x98dd98, 0x399bf80, 0x397a800, 0x0, 0x0, 0x0)
	/build/detector/tensorflow/tensorflow.go:187 +0xb18 fp=0x384bc88 sp=0x384b948 pc=0x689cbc
github.com/snowzach/doods/detector.(*Mux).Detect(0x3894510, 0x98dd98, 0x399bf80, 0x397a800, 0x3894510, 0x974c01, 0x9ddcc430)
	/build/detector/detector.go:130 +0xb4 fp=0x384bcc0 sp=0x384bc88 pc=0x69c090
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler.func1(0x98dd98, 0x399bf80, 0x83cd40, 0x397a800, 0x13, 0x98dd98, 0x399bf80, 0x0)
	/build/odrpc/rpc.pb.go:1050 +0x68 fp=0x384bce0 sp=0x384bcc0 pc=0x652b60
github.com/grpc-ecosystem/go-grpc-middleware/auth.UnaryServerInterceptor.func1(0x98dd98, 0x399bf80, 0x83cd40, 0x397a800, 0x390e370, 0x390e380, 0x7de558, 0x390e390, 0x0, 0x390e370)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/auth/auth.go:47 +0xd4 fp=0x384bd0c sp=0x384bce0 pc=0x6d7758
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1(0x98dd98, 0x399bf80, 0x83cd40, 0x397a800, 0x391e4b0, 0x71b4000, 0x383c5a0, 0x647248)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x4c fp=0x384bd38 sp=0x384bd0c pc=0x6d6c80
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1(0x98dd98, 0x399bf80, 0x83cd40, 0x397a800, 0x390e370, 0x390e380, 0x612d04, 0x80caa8, 0x399bf80, 0x807b90)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34 +0xbc fp=0x384bd6c sp=0x384bd38 pc=0x6d6e10
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler(0x808a90, 0x3894510, 0x98dd98, 0x399bf80, 0x391e4b0, 0x380c2c0, 0x98dd98, 0x399bf80, 0x71b4000, 0x18080)
	/build/odrpc/rpc.pb.go:1052 +0x108 fp=0x384bda0 sp=0x384bd6c pc=0x6472ac
google.golang.org/grpc.(*Server).processUnaryRPC(0x392e000, 0x990b30, 0x38bc410, 0x389e6c0, 0x380c5e0, 0xe50834, 0x0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x468 fp=0x384bf1c sp=0x384bda0 pc=0x612d4c
google.golang.org/grpc.(*Server).handleStream(0x392e000, 0x990b30, 0x38bc410, 0x389e6c0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xa88 fp=0x384bfb0 sp=0x384bf1c pc=0x61631c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x3918560, 0x392e000, 0x990b30, 0x38bc410, 0x389e6c0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:871 +0x84 fp=0x384bfd4 sp=0x384bfb0 pc=0x621424
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/go/pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1b8

goroutine 48 [select]:
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).runStream(0x38f2910)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:426 +0x8c
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams(0x38f2910, 0x3893440, 0x8933c4)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:416 +0x430
google.golang.org/grpc.(*Server).serveStreams(0x392e000, 0x990b30, 0x38f2910)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:855 +0xe4
google.golang.org/grpc.(*Server).ServeHTTP(0x392e000, 0x98c158, 0x38965c0, 0x38f4500)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:924 +0xcc
github.com/snowzach/doods/server.New.func3(0x98c158, 0x38965c0, 0x38f4500)
	/build/server/server.go:183 +0x50
net/http.HandlerFunc.ServeHTTP(0x380e210, 0x98c158, 0x38965c0, 0x38f4500)
	/usr/local/go/src/net/http/server.go:2012 +0x34
golang.org/x/net/http2.(*serverConn).runHandler(0x3a16380, 0x38965c0, 0x38f4500, 0x3894470)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:2152 +0x74
created by golang.org/x/net/http2.(*serverConn).processHeaders
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:1882 +0x418

goroutine 49 [select]:
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams.func1(0x38ccd40, 0x38f2910, 0x38965f0)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:344 +0xa4
created by google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:343 +0xac

goroutine 64 [IO wait]:
internal/poll.runtime_pollWait(0x9ddc7a14, 0x72, 0xffffffff)
	/usr/local/go/src/runtime/netpoll.go:203 +0x44
internal/poll.(*pollDesc).wait(0x38bc604, 0x72, 0x0, 0x1, 0xffffffff)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0x38bc5f0, 0x380d82d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:169 +0x174
net.(*netFD).Read(0x38bc5f0, 0x380d82d, 0x1, 0x1, 0x1, 0xabcd8, 0x39be280)
	/usr/local/go/src/net/fd_unix.go:202 +0x38
net.(*conn).Read(0x3810578, 0x380d82d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:184 +0x64
net/http.(*connReader).backgroundRead(0x380d820)
	/usr/local/go/src/net/http/server.go:678 +0x44
created by net/http.(*connReader).startBackgroundRead
	/usr/local/go/src/net/http/server.go:674 +0xc0

goroutine 99 [syscall]:
github.com/tensorflow/tensorflow/tensorflow/go._Cfunc_TF_SessionRun(0x8ed06c00, 0x0, 0x38968c8, 0x38968d0, 0x1, 0x3893a20, 0x3894640, 0x4, 0x0, 0x0, ...)
	_cgo_gotypes.go:1452 +0x30 fp=0x38478ac sp=0x3847898 pc=0x67663c
github.com/tensorflow/tensorflow/tensorflow/go.(*Session).Run.func1(0x3902600, 0x38609c0, 0x3847ab4, 0x3847aec, 0x4, 0x4, 0x0, 0x0, 0x0, 0x38968d8)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/session.go:149 +0x210 fp=0x3847900 sp=0x38478ac pc=0x685158
github.com/tensorflow/tensorflow/tensorflow/go.(*Session).Run(0x3902600, 0x3847ab4, 0x3847aec, 0x4, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/session.go:149 +0x14c fp=0x3847948 sp=0x3847900 pc=0x67d0e8
github.com/snowzach/doods/detector/tensorflow.(*detector).Detect(0x388f3c0, 0x98dd98, 0x3893580, 0x3860880, 0x0, 0x0, 0x0)
	/build/detector/tensorflow/tensorflow.go:187 +0xb18 fp=0x3847c88 sp=0x3847948 pc=0x689cbc
github.com/snowzach/doods/detector.(*Mux).Detect(0x3894510, 0x98dd98, 0x3893580, 0x3860880, 0x3894510, 0x974c01, 0x9ddcc430)
	/build/detector/detector.go:130 +0xb4 fp=0x3847cc0 sp=0x3847c88 pc=0x69c090
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler.func1(0x98dd98, 0x3893580, 0x83cd40, 0x3860880, 0x13, 0x98dd98, 0x3893580, 0x0)
	/build/odrpc/rpc.pb.go:1050 +0x68 fp=0x3847ce0 sp=0x3847cc0 pc=0x652b60
github.com/grpc-ecosystem/go-grpc-middleware/auth.UnaryServerInterceptor.func1(0x98dd98, 0x3893580, 0x83cd40, 0x3860880, 0x38944d0, 0x38944e0, 0x7de558, 0x38944f0, 0x0, 0x38944d0)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/auth/auth.go:47 +0xd4 fp=0x3847d0c sp=0x3847ce0 pc=0x6d7758
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1(0x98dd98, 0x3893580, 0x83cd40, 0x3860880, 0x38906c0, 0x71ce000, 0x383c960, 0x647248)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x4c fp=0x3847d38 sp=0x3847d0c pc=0x6d6c80
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1(0x98dd98, 0x3893580, 0x83cd40, 0x3860880, 0x38944d0, 0x38944e0, 0x612d04, 0x80caa8, 0x3893580, 0x807b90)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34 +0xbc fp=0x3847d6c sp=0x3847d38 pc=0x6d6e10
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler(0x808a90, 0x3894510, 0x98dd98, 0x3893580, 0x38906c0, 0x380c2c0, 0x98dd98, 0x3893580, 0x71ce000, 0x1575b)
	/build/odrpc/rpc.pb.go:1052 +0x108 fp=0x3847da0 sp=0x3847d6c pc=0x6472ac
google.golang.org/grpc.(*Server).processUnaryRPC(0x392e000, 0x990b30, 0x38f2910, 0x38f01b0, 0x380c5e0, 0xe50834, 0x0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x468 fp=0x3847f1c sp=0x3847da0 pc=0x612d4c
google.golang.org/grpc.(*Server).handleStream(0x392e000, 0x990b30, 0x38f2910, 0x38f01b0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xa88 fp=0x3847fb0 sp=0x3847f1c pc=0x61631c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x389ce90, 0x392e000, 0x990b30, 0x38f2910, 0x38f01b0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:871 +0x84 fp=0x3847fd4 sp=0x3847fb0 pc=0x621424
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/go/pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1b8

goroutine 95 [select]:
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).runStream(0x39f2a50)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:426 +0x8c
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams(0x39f2a50, 0x3a0bdc0, 0x8933c4)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:416 +0x430
google.golang.org/grpc.(*Server).serveStreams(0x392e000, 0x990b30, 0x39f2a50)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:855 +0xe4
google.golang.org/grpc.(*Server).ServeHTTP(0x392e000, 0x98c158, 0x39f4558, 0x39fec00)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:924 +0xcc
github.com/snowzach/doods/server.New.func3(0x98c158, 0x39f4558, 0x39fec00)
	/build/server/server.go:183 +0x50
net/http.HandlerFunc.ServeHTTP(0x380e210, 0x98c158, 0x39f4558, 0x39fec00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
golang.org/x/net/http2.(*serverConn).runHandler(0x3a16380, 0x39f4558, 0x39fec00, 0x38b4710)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:2152 +0x74
created by golang.org/x/net/http2.(*serverConn).processHeaders
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:1882 +0x418

goroutine 137 [select]:
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams.func1(0x39f1280, 0x39f2c80, 0x39f4678)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:344 +0xa4
created by google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:343 +0xac

goroutine 100 [select]:
google.golang.org/grpc/internal/transport.(*Stream).waitOnHeader(0x389ea20)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/transport.go:321 +0x94
google.golang.org/grpc/internal/transport.(*Stream).RecvCompress(...)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/transport.go:336
google.golang.org/grpc.(*csAttempt).recvMsg(0x38bc5a0, 0x838da0, 0x74703e0, 0x0, 0xffffffff, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:894 +0x540
google.golang.org/grpc.(*clientStream).RecvMsg.func1(0x38bc5a0, 0x128c6, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:759 +0x34
google.golang.org/grpc.(*clientStream).withRetry(0x394a1e0, 0x39c3448, 0x39c3430, 0xafa6609c, 0x38103a8)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:617 +0x60
google.golang.org/grpc.(*clientStream).RecvMsg(0x394a1e0, 0x838da0, 0x74703e0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:758 +0xcc
google.golang.org/grpc.invoke(0x98dd98, 0x7470380, 0x861371, 0x13, 0x83cd40, 0x397aa80, 0x838da0, 0x74703e0, 0x383cf00, 0x390e430, ...)
	/go/pkg/mod/google.golang.org/[email protected]/call.go:73 +0xdc
google.golang.org/grpc.(*ClientConn).Invoke(0x383cf00, 0x98dd98, 0x7470380, 0x861371, 0x13, 0x83cd40, 0x397aa80, 0x838da0, 0x74703e0, 0x390e430, ...)
	/go/pkg/mod/google.golang.org/[email protected]/call.go:37 +0x108
github.com/snowzach/doods/odrpc.(*odrpcClient).Detect(0x39010d0, 0x98dd98, 0x7470380, 0x397aa80, 0x390e430, 0x2, 0x2, 0x98dd98, 0x7470380, 0x0)
	/build/odrpc/rpc.pb.go:967 +0x88
github.com/snowzach/doods/odrpc.request_Odrpc_Detect_0(0x98dd98, 0x7470380, 0x98e870, 0xeb1f48, 0x98bc58, 0x39010d0, 0x38ba780, 0x74702c0, 0x0, 0x74482d58, ...)
	/build/odrpc/rpc.pb.gw.go:62 +0x29c
github.com/snowzach/doods/odrpc.RegisterOdrpcHandlerClient.func2(0x9d32e038, 0x74702a0, 0x38ba780, 0x74702c0)
	/build/odrpc/rpc.pb.gw.go:289 +0x134
github.com/grpc-ecosystem/grpc-gateway/runtime.(*ServeMux).ServeHTTP(0x391a340, 0x9d32e038, 0x74702a0, 0x38ba780)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/runtime/mux.go:240 +0x8d0
github.com/snowzach/doods/server.(*Server).ListenAndServe.func2(0x9d32e038, 0x74702a0, 0x38ba780)
	/build/server/server.go:268 +0x34
terminate called recursively
net/http.HandlerFunc.ServeHTTP(...)
	/usr/local/go/src/net/http/server.go:2012
github.com/go-chi/chi.(*Mux).routeHTTP(0x386a030, 0x9d32e038, 0x74702a0, 0x38ba780)
	/go/pkg/mod/github.com/go-chi/[email protected]/mux.go:437 +0x1f4
net/http.HandlerFunc.ServeHTTP(0x3900928, 0x9d32e038, 0x74702a0, 0x38ba780)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/cors.(*Cors).Handler.func1(0x9d32e038, 0x74702a0, 0x38ba780)
	/go/pkg/mod/github.com/go-chi/[email protected]/cors.go:228 +0x17c
net/http.HandlerFunc.ServeHTTP(0x380e220, 0x9d32e038, 0x74702a0, 0x38ba780)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/snowzach/doods/server.New.func2.1(0x98c678, 0x389e900, 0x38ba780)
	/build/server/server.go:114 +0x1a8
net/http.HandlerFunc.ServeHTTP(0x380e230, 0x98c678, 0x389e900, 0x38ba780)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/render.SetContentType.func1.1(0x98c678, 0x389e900, 0x38ba700)
	/go/pkg/mod/github.com/go-chi/[email protected]/content_type.go:52 +0x138
net/http.HandlerFunc.ServeHTTP(0x380e240, 0x98c678, 0x389e900, 0x38ba700)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi/middleware.Recoverer.func1(0x98c678, 0x389e900, 0x38ba700)
	/go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:37 +0x6c
net/http.HandlerFunc.ServeHTTP(0x380e250, 0x98c678, 0x389e900, 0x38ba700)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi/middleware.RequestID.func1(0x98c678, 0x389e900, 0x38ba680)
	/go/pkg/mod/github.com/go-chi/[email protected]/middleware/request_id.go:76 +0x168
net/http.HandlerFunc.ServeHTTP(0x380e260, 0x98c678, 0x389e900, 0x38ba680)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi.(*Mux).ServeHTTP(0x386a030, 0x98c678, 0x389e900, 0x38ba600)
	/go/pkg/mod/github.com/go-chi/[email protected]/mux.go:86 +0x220
github.com/snowzach/doods/server.New.func3(0x98c678, 0x389e900, 0x38ba600)
	/build/server/server.go:185 +0x7c
net/http.HandlerFunc.ServeHTTP(0x380e210, 0x98c678, 0x389e900, 0x38ba600)
	/usr/local/go/src/net/http/server.go:2012 +0x34
golang.org/x/net/http2/h2c.h2cHandler.ServeHTTP(0x986750, 0x380e210, 0x386a510, 0x98c678, 0x389e900, 0x38ba600)
	/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:98 +0x274
net/http.serverHandler.ServeHTTP(0x39122d0, 0x98c678, 0x389e900, 0x38ba600)
	/usr/local/go/src/net/http/server.go:2807 +0x88
net/http.(*conn).serve(0x38a4360, 0x98dd38, 0x7470100)
	/usr/local/go/src/net/http/server.go:1895 +0x7d4
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2933 +0x2d0

goroutine 103 [select]:
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).runStream(0x38f2b40)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:426 +0x8c
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams(0x38f2b40, 0x3893bc0, 0x8933c4)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:416 +0x430
google.golang.org/grpc.(*Server).serveStreams(0x392e000, 0x990b30, 0x38f2b40)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:855 +0xe4
google.golang.org/grpc.(*Server).ServeHTTP(0x392e000, 0x98c158, 0x3896920, 0x38f4680)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:924 +0xcc
github.com/snowzach/doods/server.New.func3(0x98c158, 0x3896920, 0x38f4680)
	/build/server/server.go:183 +0x50
net/http.HandlerFunc.ServeHTTP(0x380e210, 0x98c158, 0x3896920, 0x38f4680)
	/usr/local/go/src/net/http/server.go:2012 +0x34
golang.org/x/net/http2.(*serverConn).runHandler(0x3a16380, 0x3896920, 0x38f4680, 0x3894660)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:2152 +0x74
created by golang.org/x/net/http2.(*serverConn).processHeaders
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:1882 +0x418

goroutine 134 [IO wait]:
internal/poll.runtime_pollWait(0x9ddc7aa8, 0x72, 0xffffffff)
	/usr/local/go/src/runtime/netpoll.go:203 +0x44
internal/poll.(*pollDesc).wait(0x39f2b04, 0x72, 0x0, 0x1, 0xffffffff)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0x39f2af0, 0x727e00d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:169 +0x174
net.(*netFD).Read(0x39f2af0, 0x727e00d, 0x1, 0x1, 0x1, 0xabcd8, 0x39be280)
	/usr/local/go/src/net/fd_unix.go:202 +0x38
net.(*conn).Read(0x39f4598, 0x727e00d, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:184 +0x64
net/http.(*connReader).backgroundRead(0x727e000)
	/usr/local/go/src/net/http/server.go:678 +0x44
created by net/http.(*connReader).startBackgroundRead
	/usr/local/go/src/net/http/server.go:674 +0xc0

goroutine 106 [syscall]:
github.com/tensorflow/tensorflow/tensorflow/go._Cfunc_TF_SessionRun(0x8ed8d548, 0x0, 0x3810560, 0x3810568, 0x1, 0x7470a00, 0x390ea20, 0x4, 0x0, 0x0, ...)
	_cgo_gotypes.go:1452 +0x30 fp=0x38488ac sp=0x3848898 pc=0x67663c
github.com/tensorflow/tensorflow/tensorflow/go.(*Session).Run.func1(0x3902740, 0x397abc0, 0x3848ab4, 0x3848aec, 0x4, 0x4, 0x0, 0x0, 0x0, 0x3810570)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/session.go:149 +0x210 fp=0x3848900 sp=0x38488ac pc=0x685158
github.com/tensorflow/tensorflow/tensorflow/go.(*Session).Run(0x3902740, 0x3848ab4, 0x3848aec, 0x4, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/session.go:149 +0x14c fp=0x3848948 sp=0x3848900 pc=0x67d0e8
github.com/snowzach/doods/detector/tensorflow.(*detector).Detect(0x388f3c0, 0x98dd98, 0x74708a0, 0x397ab40, 0x0, 0x0, 0x0)
	/build/detector/tensorflow/tensorflow.go:187 +0xb18 fp=0x3848c88 sp=0x3848948 pc=0x689cbc
github.com/snowzach/doods/detector.(*Mux).Detect(0x3894510, 0x98dd98, 0x74708a0, 0x397ab40, 0x3894510, 0x974c01, 0x9ddcc430)
	/build/detector/detector.go:130 +0xb4 fp=0x3848cc0 sp=0x3848c88 pc=0x69c090
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler.func1(0x98dd98, 0x74708a0, 0x83cd40, 0x397ab40, 0x13, 0x98dd98, 0x74708a0, 0x0)
	/build/odrpc/rpc.pb.go:1050 +0x68 fp=0x3848ce0 sp=0x3848cc0 pc=0x652b60
github.com/grpc-ecosystem/go-grpc-middleware/auth.UnaryServerInterceptor.func1(0x98dd98, 0x74708a0, 0x83cd40, 0x397ab40, 0x390e9a0, 0x390e9b0, 0x7de558, 0x390e9c0, 0x0, 0x390e9a0)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/auth/auth.go:47 +0xd4 fp=0x3848d0c sp=0x3848ce0 pc=0x6d7758
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1(0x98dd98, 0x74708a0, 0x83cd40, 0x397ab40, 0x391e930, 0x7834000, 0x3a14000, 0x647248)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x4c fp=0x3848d38 sp=0x3848d0c pc=0x6d6c80
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1(0x98dd98, 0x74708a0, 0x83cd40, 0x397ab40, 0x390e9a0, 0x390e9b0, 0x612d04, 0x80caa8, 0x74708a0, 0x807b90)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34 +0xbc fp=0x3848d6c sp=0x3848d38 pc=0x6d6e10
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler(0x808a90, 0x3894510, 0x98dd98, 0x74708a0, 0x391e930, 0x380c2c0, 0x98dd98, 0x74708a0, 0x7834000, 0x30074)
	/build/odrpc/rpc.pb.go:1052 +0x108 fp=0x3848da0 sp=0x3848d6c pc=0x6472ac
google.golang.org/grpc.(*Server).processUnaryRPC(0x392e000, 0x990b30, 0x38f2b40, 0x38f0480, 0x380c5e0, 0xe50834, 0x0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x468 fp=0x3848f1c sp=0x3848da0 pc=0x612d4c
google.golang.org/grpc.(*Server).handleStream(0x392e000, 0x990b30, 0x38f2b40, 0x38f0480, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xa88 fp=0x3848fb0 sp=0x3848f1c pc=0x61631c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x389cfa4, 0x392e000, 0x990b30, 0x38f2b40, 0x38f0480)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:871 +0x84 fp=0x3848fd4 sp=0x3848fb0 pc=0x621424
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/go/pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1b8

goroutine 109 [select]:
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).runStream(0x39f2c80)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:426 +0x8c
google.golang.org/grpc/internal/transport.(*serverHandlerTransport).HandleStreams(0x39f2c80, 0x727e580, 0x8933c4)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/handler_server.go:416 +0x430
google.golang.org/grpc.(*Server).serveStreams(0x392e000, 0x990b30, 0x39f2c80)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:855 +0xe4
google.golang.org/grpc.(*Server).ServeHTTP(0x392e000, 0x98c158, 0x3896998, 0x38f4880)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:924 +0xcc
github.com/snowzach/doods/server.New.func3(0x98c158, 0x3896998, 0x38f4880)
	/build/server/server.go:183 +0x50
net/http.HandlerFunc.ServeHTTP(0x380e210, 0x98c158, 0x3896998, 0x38f4880)
	/usr/local/go/src/net/http/server.go:2012 +0x34
golang.org/x/net/http2.(*serverConn).runHandler(0x3a16380, 0x3896998, 0x38f4880, 0x3894750)
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:2152 +0x74
created by golang.org/x/net/http2.(*serverConn).processHeaders
	/go/pkg/mod/golang.org/x/[email protected]/http2/server.go:1882 +0x418

goroutine 126 [select]:
google.golang.org/grpc/internal/transport.(*Stream).waitOnHeader(0x3912e10)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/transport.go:321 +0x94
google.golang.org/grpc/internal/transport.(*Stream).RecvCompress(...)
	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/transport.go:336
google.golang.org/grpc.(*csAttempt).recvMsg(0x393ce10, 0x838da0, 0x380dae0, 0x0, 0xffffffff, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:894 +0x540
google.golang.org/grpc.(*clientStream).RecvMsg.func1(0x393ce10, 0x119b4, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:759 +0x34
google.golang.org/grpc.(*clientStream).withRetry(0x3a501e0, 0x3a5f448, 0x3a5f430, 0xafa66ac8, 0x3a4d040)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:617 +0x60
google.golang.org/grpc.(*clientStream).RecvMsg(0x3a501e0, 0x838da0, 0x380dae0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/stream.go:758 +0xcc
google.golang.org/grpc.invoke(0x98dd98, 0x380da80, 0x861371, 0x13, 0x83cd40, 0x391ae00, 0x838da0, 0x380dae0, 0x383cf00, 0x380eb80, ...)
	/go/pkg/mod/google.golang.org/[email protected]/call.go:73 +0xdc
google.golang.org/grpc.(*ClientConn).Invoke(0x383cf00, 0x98dd98, 0x380da80, 0x861371, 0x13, 0x83cd40, 0x391ae00, 0x838da0, 0x380dae0, 0x380eb80, ...)
	/go/pkg/mod/google.golang.org/[email protected]/call.go:37 +0x108
github.com/snowzach/doods/odrpc.(*odrpcClient).Detect(0x39010d0, 0x98dd98, 0x380da80, 0x391ae00, 0x380eb80, 0x2, 0x2, 0x98dd98, 0x380da80, 0x0)
	/build/odrpc/rpc.pb.go:967 +0x88
github.com/snowzach/doods/odrpc.request_Odrpc_Detect_0(0x98dd98, 0x380da80, 0x98e870, 0xeb1f48, 0x98bc58, 0x39010d0, 0x392cc00, 0x380d9c0, 0x0, 0x74482d58, ...)
	/build/odrpc/rpc.pb.gw.go:62 +0x29c
github.com/snowzach/doods/odrpc.RegisterOdrpcHandlerClient.func2(0x9d32e038, 0x380d9a0, 0x392cc00, 0x380d9c0)
	/build/odrpc/rpc.pb.gw.go:289 +0x134
github.com/grpc-ecosystem/grpc-gateway/runtime.(*ServeMux).ServeHTTP(0x391a340, 0x9d32e038, 0x380d9a0, 0x392cc00)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/runtime/mux.go:240 +0x8d0
github.com/snowzach/doods/server.(*Server).ListenAndServe.func2(0x9d32e038, 0x380d9a0, 0x392cc00)
	/build/server/server.go:268 +0x34
net/http.HandlerFunc.ServeHTTP(...)
	/usr/local/go/src/net/http/server.go:2012
github.com/go-chi/chi.(*Mux).routeHTTP(0x386a030, 0x9d32e038, 0x380d9a0, 0x392cc00)
	/go/pkg/mod/github.com/go-chi/[email protected]/mux.go:437 +0x1f4
net/http.HandlerFunc.ServeHTTP(0x3900928, 0x9d32e038, 0x380d9a0, 0x392cc00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/cors.(*Cors).Handler.func1(0x9d32e038, 0x380d9a0, 0x392cc00)
	/go/pkg/mod/github.com/go-chi/[email protected]/cors.go:228 +0x17c
net/http.HandlerFunc.ServeHTTP(0x380e220, 0x9d32e038, 0x380d9a0, 0x392cc00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/snowzach/doods/server.New.func2.1(0x98c678, 0x3912cf0, 0x392cc00)
	/build/server/server.go:114 +0x1a8
net/http.HandlerFunc.ServeHTTP(0x380e230, 0x98c678, 0x3912cf0, 0x392cc00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/render.SetContentType.func1.1(0x98c678, 0x3912cf0, 0x392cb80)
	/go/pkg/mod/github.com/go-chi/[email protected]/content_type.go:52 +0x138
net/http.HandlerFunc.ServeHTTP(0x380e240, 0x98c678, 0x3912cf0, 0x392cb80)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi/middleware.Recoverer.func1(0x98c678, 0x3912cf0, 0x392cb80)
	/go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:37 +0x6c
net/http.HandlerFunc.ServeHTTP(0x380e250, 0x98c678, 0x3912cf0, 0x392cb80)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi/middleware.RequestID.func1(0x98c678, 0x3912cf0, 0x392cb00)
	/go/pkg/mod/github.com/go-chi/[email protected]/middleware/request_id.go:76 +0x168
net/http.HandlerFunc.ServeHTTP(0x380e260, 0x98c678, 0x3912cf0, 0x392cb00)
	/usr/local/go/src/net/http/server.go:2012 +0x34
github.com/go-chi/chi.(*Mux).ServeHTTP(0x386a030, 0x98c678, 0x3912cf0, 0x392ca80)
	/go/pkg/mod/github.com/go-chi/[email protected]/mux.go:86 +0x220
github.com/snowzach/doods/server.New.func3(0x98c678, 0x3912cf0, 0x392ca80)
	/build/server/server.go:185 +0x7c
net/http.HandlerFunc.ServeHTTP(0x380e210, 0x98c678, 0x3912cf0, 0x392ca80)
	/usr/local/go/src/net/http/server.go:2012 +0x34
golang.org/x/net/http2/h2c.h2cHandler.ServeHTTP(0x986750, 0x380e210, 0x386a510, 0x98c678, 0x3912cf0, 0x392ca80)
	/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:98 +0x274
net/http.serverHandler.ServeHTTP(0x39122d0, 0x98c678, 0x3912cf0, 0x392ca80)
	/usr/local/go/src/net/http/server.go:2807 +0x88
net/http.(*conn).serve(0x392ac60, 0x98dd38, 0x380d800)
	/usr/local/go/src/net/http/server.go:1895 +0x7d4
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2933 +0x2d0

goroutine 139 [chan receive]:
github.com/snowzach/doods/detector/tensorflow.(*detector).Detect(0x388f3c0, 0x98dd98, 0x3893fe0, 0x3860c80, 0x0, 0x0, 0x0)
	/build/detector/tensorflow/tensorflow.go:112 +0x64
github.com/snowzach/doods/detector.(*Mux).Detect(0x3894510, 0x98dd98, 0x3893fe0, 0x3860c80, 0x3894510, 0x974c01, 0x9ddcc430)
	/build/detector/detector.go:130 +0xb4
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler.func1(0x98dd98, 0x3893fe0, 0x83cd40, 0x3860c80, 0x13, 0x98dd98, 0x3893fe0, 0x0)
	/build/odrpc/rpc.pb.go:1050 +0x68
github.com/grpc-ecosystem/go-grpc-middleware/auth.UnaryServerInterceptor.func1(0x98dd98, 0x3893fe0, 0x83cd40, 0x3860c80, 0x3894810, 0x3894820, 0x7de558, 0x3894830, 0x0, 0x3894810)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/auth/auth.go:47 +0xd4
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1(0x98dd98, 0x3893fe0, 0x83cd40, 0x3860c80, 0x3890930, 0x79e4000, 0x3a141e0, 0x647248)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x4c
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1(0x98dd98, 0x3893fe0, 0x83cd40, 0x3860c80, 0x3894810, 0x3894820, 0x612d04, 0x80caa8, 0x3893fe0, 0x807b90)
	/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34 +0xbc
github.com/snowzach/doods/odrpc._Odrpc_Detect_Handler(0x808a90, 0x3894510, 0x98dd98, 0x3893fe0, 0x3890930, 0x380c2c0, 0x98dd98, 0x3893fe0, 0x79e4000, 0x119b4)
	/build/odrpc/rpc.pb.go:1052 +0x108
google.golang.org/grpc.(*Server).processUnaryRPC(0x392e000, 0x990b30, 0x39f2c80, 0x38a0c60, 0x380c5e0, 0xe50834, 0x0, 0x0, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x468
google.golang.org/grpc.(*Server).handleStream(0x392e000, 0x990b30, 0x39f2c80, 0x38a0c60, 0x0)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xa88
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x39fb240, 0x392e000, 0x990b30, 0x39f2c80, 0x38a0c60)
	/go/pkg/mod/google.golang.org/[email protected]/server.go:871 +0x84
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/go/pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1b8

trap    0x0
error   0x0
oldmask 0x0
r0      0x0
r1      0x9bdfa30c
r2      0x0
r3      0x8
r4      0x0
r5      0xb6fa0968
r6      0x9bdfa30c
r7      0xaf
r8      0x9bdfa7e8
r9      0x199
r10     0x0
fp      0x9bdfae40
ip      0xaf
sp      0x9bdfa300
lr      0xafb1a0af
pc      0xafb0c746
cpsr    0xc0030
fault   0x0

Issues with saving output files

So I have everything up and running, and it seems to work great, except when it comes to saving files.

Log output

2020-01-03 13:43:50 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.doods_door fails
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 270, in async_update_ha_state
await self.async_device_update()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 448, in async_device_update
await self.async_update()
File "/usr/src/homeassistant/homeassistant/components/image_processing/init.py", line 177, in async_update
await self.async_process_image(image.content)
File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 375, in process_image
self._save_image(image, matches, paths)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 273, in _save_image
img.save(path)
File "/usr/local/lib/python3.7/site-packages/PIL/Image.py", line 2081, in save
fp = builtins.open(filename, "w+b")
FileNotFoundError: [Errno 2] No such file or directory: '/home/hass/.homeassistant/image/door_latest.jpg'
2020-01-03 13:43:53 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:43:53 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.doods_door fails
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 270, in async_update_ha_state
await self.async_device_update()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 448, in async_device_update
await self.async_update()
File "/usr/src/homeassistant/homeassistant/components/image_processing/init.py", line 177, in async_update
await self.async_process_image(image.content)
File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 375, in process_image
self._save_image(image, matches, paths)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 273, in _save_image
img.save(path)
File "/usr/local/lib/python3.7/site-packages/PIL/Image.py", line 2081, in save
fp = builtins.open(filename, "w+b")
FileNotFoundError: [Errno 2] No such file or directory: '/home/hass/.homeassistant/image/door_latest.jpg'
2020-01-03 13:43:56 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:43:57 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:44:00 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:44:03 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:44:07 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:44:10 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.doods_driveway fails
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 270, in async_update_ha_state
await self.async_device_update()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 448, in async_device_update
await self.async_update()
File "/usr/src/homeassistant/homeassistant/components/image_processing/init.py", line 177, in async_update
await self.async_process_image(image.content)
File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 375, in process_image
self._save_image(image, matches, paths)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 273, in _save_image
img.save(path)
File "/usr/local/lib/python3.7/site-packages/PIL/Image.py", line 2081, in save
fp = builtins.open(filename, "w+b")
FileNotFoundError: [Errno 2] No such file or directory: '/home/hass/.homeassistant/image/driveway_latest.jpg'
2020-01-03 13:44:11 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:44:14 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:44:15 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:44:15 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.doods_garden_2 fails
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 270, in async_update_ha_state
await self.async_device_update()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 448, in async_device_update
await self.async_update()
File "/usr/src/homeassistant/homeassistant/components/image_processing/init.py", line 177, in async_update
await self.async_process_image(image.content)
File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 375, in process_image
self._save_image(image, matches, paths)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 273, in _save_image
img.save(path)
File "/usr/local/lib/python3.7/site-packages/PIL/Image.py", line 2081, in save
fp = builtins.open(filename, "w+b")
FileNotFoundError: [Errno 2] No such file or directory: '/home/hass/.homeassistant/image/garden_latest.jpg'
2020-01-03 13:44:19 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:44:22 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:01
2020-01-03 13:44:22 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.doods_garden_2 fails
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 270, in async_update_ha_state
await self.async_device_update()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 448, in async_device_update
await self.async_update()
File "/usr/src/homeassistant/homeassistant/components/image_processing/init.py", line 177, in async_update
await self.async_process_image(image.content)
File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 375, in process_image
self._save_image(image, matches, paths)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 273, in _save_image
img.save(path)
File "/usr/local/lib/python3.7/site-packages/PIL/Image.py", line 2081, in save
fp = builtins.open(filename, "w+b")
FileNotFoundError: [Errno 2] No such file or directory: '/home/hass/.homeassistant/image/garden_latest.jpg'

Folder permissions for /home/hass/.homeassistant/image:
drwxr-xr-x 2 hass hass 4.0K Jan 3 13:38 image

Folder permissions for /tmp/:
drwxr-xr-x 2 hass hass 4.0K Jan 3 12:53 tmp

Config (I have the same but with /tmp/ as the dir):

image_processing:
  - platform: doods
    scan_interval: 1
    url: http://192.168.0.113:8080
    timeout: 60
    detector: default
    source:
      - entity_id: camera.driveway
    file_out:
      - "/home/hass/.homeassistant/image/{{ camera_entity.split('.')[1] }}_latest.jpg"
      - "/home/hass/.homeassistant/image/{{ camera_entity.split('.')[1] }}{{ now().strftime('%Y%m%d%H%M%S') }}.jpg"
    labels:
      - name: person
      - car
      - truck
    confidence: 60
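
For reference, a minimal sketch (assuming Home Assistant's Jinja-style templating, written as plain Python purely for illustration) of what the file_out templates above resolve to for an entity like camera.driveway; the rendered names match the paths in the error log:

from datetime import datetime

camera_entity = "camera.driveway"  # hypothetical entity id matching the config above

# the template keeps only the part of the entity id after "camera."
name = camera_entity.split('.')[1]
latest = f"/home/hass/.homeassistant/image/{name}_latest.jpg"
stamped = f"/home/hass/.homeassistant/image/{name}{datetime.now().strftime('%Y%m%d%H%M%S')}.jpg"

print(latest)   # /home/hass/.homeassistant/image/driveway_latest.jpg
print(stamped)  # e.g. /home/hass/.homeassistant/image/driveway20200103134410.jpg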

Makefile errors on local machine

The build script tries to build Dockerfile.builder.${CONFIG}, but there is only a single Dockerfile.builder without a processor-specific extension.

The Dockerfiles reference registry.prozach.org instead of a local repo for built images.

Using custom tflite model

Hi, thank you for this excellent add-on for Home Assistant. It works great for detecting people.

I wish to use a custom model and tried to generate one using the Colab notebook at https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb

Configuration:

- name: tflite2
  type: tflite
  modelFile: /share/doods/p4/model_quant.tflite
  labelFile: /share/doods/p4/labels.txt

Getting the following error message.

2020-10-07T20:40:15.574Z ERROR detector/detector.go:73 Could not initialize detector tflite2: unsupported input tensor name: input_2 {"package": "detector"}

Could you please give me some guidance on how to generate and use a custom model? I appreciate your time. Thank you.

How can I be sure that my Coral Google USB is being used?

Hello,

I am running this on Raspberry Pi OS Lite on an RPi4 4GB.
The Google Coral USB is recognized with "lsusb".
I have edited config.yaml to set "hwaccel: True" on my "default" detector.
The Coral USB is warm, but is there a way to check in software that it is being used instead of the RPi CPU?
My "process_time" every 2 s is between 0.35 s and 0.45 s (grabbing from an RTSP source, 1080p 15 fps).

Thanks.
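
One way to sanity-check this from outside DOODS - a minimal sketch, not an official feature, assuming a local test image and the default port: time a /detect request and watch the "Detection Complete" log line, which reports a "device" path when the Edge TPU handled the inference and "device": null when it ran on the CPU (compare the cpu vs edgetpu logs later in this document).

import base64
import time

import requests

DOODS_URL = "http://localhost:8080/detect"  # adjust to your DOODS address

with open("test.jpg", "rb") as f:  # any sample snapshot
    image_b64 = base64.b64encode(f.read()).decode()

payload = {"detector_name": "default", "data": image_b64, "detect": {"*": 50}}

start = time.time()
resp = requests.post(DOODS_URL, json=payload, timeout=60)
print(resp.status_code, f"took {time.time() - start:.3f}s")

# In the DOODS container log, the matching "Detection Complete" entry shows
# "device": {"Type":1,"Path":"/sys/bus/usb/devices/..."} when the Edge TPU is used,
# and "device": null when inference falls back to the CPU.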

File Output

I'm able to utilize DOODS within hass.io; however, I'd prefer to use the Docker container directly for more flexibility. The hass.io implementation provides the ability to output the file to a local directory and it works great. I don't see any documentation on how to do this via an API call to the Docker container.

Doods stops working after some time

My DOODS server, using the Coral stick and running in Docker/k8s, stops working after some time. I can still get the detectors output, but the detect method times out.

Killing/restarting the container fixed the problem. There was no corresponding log entry for these failed requests.

It would be cool to have a health check URL that performs an actual image detection, to verify it is working end to end. I will enable debug logs in order to debug this issue...
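
Until something like that exists, an external probe can approximate it; a minimal sketch (an assumption, not a built-in DOODS feature) that posts a small test image to /detect and fails on errors or timeouts:

import base64
import sys

import requests

DOODS_URL = "http://localhost:8080/detect"  # adjust to your deployment
TIMEOUT = 30  # seconds before the probe counts as failed

def healthy() -> bool:
    with open("test.jpg", "rb") as f:  # any small sample image
        data = base64.b64encode(f.read()).decode()
    try:
        resp = requests.post(
            DOODS_URL,
            json={"detector_name": "default", "data": data, "detect": {"*": 50}},
            timeout=TIMEOUT,
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    # exit non-zero on failure so it can be wired into a Kubernetes exec/liveness probe
    sys.exit(0 if healthy() else 1)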

Implement Prometheus support

It would be great to get some metrics from this project, like the number of requests and their duration... Maybe there are also metrics for the Edge TPU?

mqtt support

Any thoughts on adding MQTT support, kind of like how Frigate does it?

Sequence analysis

Is it possible to analyse a sequence of frames with DOODS and end up with an average? I raised the same question about another system, but I see the dilemma: for example, a door opens, DOODS triggers and sees nothing, but a person shows up 2 seconds later. It would be great if DOODS could recheck/analyse x frames so it does not miss a close event like a person exiting a car or a door. Same with the car today: it would just trigger on the car, but if it checked some more frames it would detect both the car and the person exiting it. A client-side sketch of the idea follows.
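
A minimal client-side sketch of the idea (an assumption about how it could be approximated today, not an existing DOODS feature): run /detect over N consecutive frames and only keep labels that show up in at least k of them. The "detections"/"label" response fields are assumed from the detection result format shown elsewhere in this document.

import base64
from collections import Counter

import requests

DOODS_URL = "http://localhost:8080/detect"  # adjust to your setup

def labels_in_frame(jpeg_bytes: bytes) -> set:
    """Return the set of labels DOODS reports for one frame."""
    body = {
        "detector_name": "default",
        "data": base64.b64encode(jpeg_bytes).decode(),
        "detect": {"*": 50},
    }
    resp = requests.post(DOODS_URL, json=body, timeout=60)
    resp.raise_for_status()
    return {d["label"] for d in resp.json().get("detections", [])}

def stable_labels(frames: list, min_hits: int = 3) -> set:
    """Labels seen in at least min_hits of the supplied JPEG frames."""
    counts = Counter()
    for frame in frames:
        counts.update(labels_in_frame(frame))
    return {label for label, hits in counts.items() if hits >= min_hits}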

Label variable in DOODS integration doco for HA

HA doco for DOODS:
https://www.home-assistant.io/integrations/doods/

I tried utilizing DOODS on my HA instances, and it has worked so far without adding the label variable. When I looked at the doco for the label variable in order to reduce detected objects and save some resources, I thought the following highlighted area was a bit misleading.

[screenshot of the sample configuration from the HA doco]

I was expecting the following code instead of the one shown in the sample configuration. Would you be able to confirm it, please?

  - name: person
  - name: car
  - name: truck

If that's the case, would you be able to update the doco to reduce confusion in the future?

Thx so much for this amazing project. Love it!

cpu vs edgetpu detection speed

Hi,

I have trained my own tflite model based on ssd_mobilenet_v2 and it's working fine; however, I am barely seeing any improvement in detection speed between the edgetpu version and the regular one.

The model I am using for the edgetpu is exactly the same model, just converted with the official edgetpu_compiler.

While using those 2 models with a webcam script that I have on the same Pi, I see a drastic difference in performance (1-2 fps on CPU vs ~25 fps on edgetpu), but below is what I get in DOODS.

The first log is the CPU version:

2020-11-19T16:05:28.122Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "default", "type": "tflite", "model": "models/jb_vb.tflite", "labels": 2, "width": 300, "height": 300}

2020-11-19T16:05:30.953Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "default_edgetpu", "type": "tflite-edgetpu", "model": "models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite", "labels": 80, "width": 300, "height": 300}

2020-11-19T16:05:32.410Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "tensorflow", "type": "tensorflow", "model": "models/faster_rcnn_inception_v2_coco_2018_01_28.pb", "labels": 65, "width": -1, "height": -1}

2020-11-19T16:05:32.413Z	INFO	server/server.go:284	API Listening	{"package": "server", "address": ":8080", "tls": false, "version": "v0.2.5-0-gbf6d7a1-dirty"}

2020-11-19T16:05:33.646Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.134194637, "detections": 5, "device": null}

2020-11-19T16:05:33.648Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 1.109035679, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/uh0x0gFssG-000001", "remote": "10.0.10.20:58828"}

2020-11-19T16:05:43.595Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.122232716, "detections": 5, "device": null}

2020-11-19T16:05:43.596Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 1.090841654, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/uh0x0gFssG-000002", "remote": "10.0.10.20:46924"}

2020-11-19T16:05:53.568Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.122930258, "detections": 5, "device": null}

2020-11-19T16:05:53.571Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 1.101649318, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/uh0x0gFssG-000003", "remote": "10.0.10.20:58892"}

2020-11-19T16:06:03.796Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.147751694, "detections": 5, "device": null}

2020-11-19T16:06:03.798Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 1.234236726, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/uh0x0gFssG-000004", "remote": "10.0.10.20:43926"}

2020-11-19T16:06:13.626Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.144718788, "detections": 6, "device": null}

2020-11-19T16:06:13.627Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 1.163688418, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/uh0x0gFssG-000005", "remote": "10.0.10.20:59324"}

and this is the edgetpu:

2020-11-19T15:56:58.723Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "default", "type": "tflite-edgetpu", "model": "models/jb_vb_edgetpu.tflite", "labels": 2, "width": 300, "height": 300}

2020-11-19T15:56:58.784Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "default_edgetpu", "type": "tflite-edgetpu", "model": "models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite", "labels": 80, "width": 300, "height": 300}

2020-11-19T15:57:00.237Z	INFO	detector/detector.go:79	Configured Detector	{"package": "detector", "name": "tensorflow", "type": "tensorflow", "model": "models/faster_rcnn_inception_v2_coco_2018_01_28.pb", "labels": 65, "width": -1, "height": -1}

2020-11-19T15:57:00.241Z	INFO	server/server.go:284	API Listening	{"package": "server", "address": ":8080", "tls": false, "version": "v0.2.5-0-gbf6d7a1-dirty"}

2020-11-19T15:57:02.895Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.02716341, "detections": 5, "device": {"Type":1,"Path":"/sys/bus/usb/devices/2-2"}}

2020-11-19T15:57:02.897Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 1.014697619, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/hmJsqN5QyH-000001", "remote": "10.0.10.20:56134"}

2020-11-19T15:57:12.697Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.008306565, "detections": 5, "device": {"Type":1,"Path":"/sys/bus/usb/devices/2-2"}}

2020-11-19T15:57:12.698Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 0.975427531, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/hmJsqN5QyH-000002", "remote": "10.0.10.20:43820"}

2020-11-19T15:57:22.755Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.008300731, "detections": 5, "device": {"Type":1,"Path":"/sys/bus/usb/devices/2-2"}}

2020-11-19T15:57:22.757Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 1.004568588, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/hmJsqN5QyH-000003", "remote": "10.0.10.20:58616"}

2020-11-19T15:57:32.754Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.00832712, "detections": 6, "device": {"Type":1,"Path":"/sys/bus/usb/devices/2-2"}}

2020-11-19T15:57:32.756Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 1.03227084, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/hmJsqN5QyH-000004", "remote": "10.0.10.20:45378"}

2020-11-19T15:57:42.750Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.008319898, "detections": 6, "device": {"Type":1,"Path":"/sys/bus/usb/devices/2-2"}}

2020-11-19T15:57:42.751Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 1.008720694, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/hmJsqN5QyH-000005", "remote": "10.0.10.20:34080"}

2020-11-19T15:57:52.709Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.009942682, "detections": 6, "device": {"Type":1,"Path":"/sys/bus/usb/devices/2-2"}}

2020-11-19T15:57:52.710Z	INFO	server/server.go:139	HTTP Request	{"status": 200, "took": 0.977224142, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "aa09eeaee375/hmJsqN5QyH-000006", "remote": "10.0.10.20:46806"}

2020-11-19T15:58:02.865Z	INFO	tflite/detector.go:393	Detection Complete	{"package": "detector.tflite", "name": "default", "id": "", "duration": 0.0097695, "detections": 5, "device": {"Type":1,"Path":"/sys/bus/usb/devices/2-2"}}

Is there something that I am missing? The detection speed is almost identical in both cases; there is very little increase in detection speed, let alone the 10x increase I saw with those 2 models elsewhere. That does not make any sense to me, because I know the edgetpu model should run much faster with the Google Edge TPU USB accelerator...

I did of course allow detection to run for hours in both cases; the speed did not increase/decrease noticeably and stayed in the range of 1 second.

Edit: I think it's worth saying that I am running this on a Raspberry Pi 4, 2GB version, with the official Docker image.

Health check url

It would be great if you would provide some health check URL for my Kubernetes health check...

Question: How to Limit Detection Region

Hello!
I am playing around with using Node-Red to pass the payload to the API for scans.
So far it has been working very well, but I need to cut off the left 10% of the detection region to prevent a false positive from a tree.
I know how to do this from Home Assistant, but was hoping there was a way to pass it in the payload from Node-Red.

This is my current testing payload:

msg.payload = {
    "detector_name": "tensorflow",
    "data": msg.image.toString('base64'),
    "detect": {"*": 50},
}
return msg;

msg.image is where I have a snapshot from a camera - and this works perfectly.

I have tried adding a plethora of things to get the DetectionRegion that I found in the rpc.pb.go and rpc.proto files.
I have tried DetectionRegions, DetectionRegion, Regions, Region - as new tags after detect, and also as tags inside the detect tag. I'm a novice and so haven't figured out from the source exactly where it is supposed to go... :-)

Can you please assist and let me know where in the payload I can put this to get the result of "left":0.1 to prevent the left 10% of the image from being in the detection range?

Or am I chasing a ghost and this isn't even supported in the API call?

Thanks very much!
DeadEnd

Forgot to add... depending on what I do, either the detection works without the limited region, or I get an error like this:

json: cannot unmarshal object into Go struct field DetectRequest.regions of type []*odrpc.DetectRegion

or

json: cannot unmarshal object into Go struct field DetectRequest.detect of type float32

I assume I'm closer with the first error, since it complains about the regions; with the second I think my yaml is not quite correct.
Here is an example of what works, but doesn't seem to affect the detection region:

msg.payload = {
    "detector_name": "tensorflow",
    "data": msg.image.toString('base64'),
    "detect": {"*": 50},
    "region": {"left": 0.1}
}
return msg;
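
Going by the first unmarshal error above (DetectRequest.regions expects an array of DetectRegion), the field appears to be "regions" and it must be a list of region objects rather than a single "region" object. A hedged sketch of that payload shape, shown here with Python requests; the per-region fields are assumptions taken from the DetectRegion message in rpc.proto and the coordinate values quoted elsewhere in this document, and the same JSON structure should apply to the Node-Red msg.payload:

import base64

import requests

DOODS_URL = "http://localhost:8080/detect"  # adjust to your DOODS instance

with open("snapshot.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "detector_name": "tensorflow",
    "data": image_b64,
    # "regions" is a list; each entry is one DetectRegion-style object
    "regions": [
        {
            "top": 0,
            "left": 0.1,   # skip the left 10% of the frame
            "bottom": 1,
            "right": 1,
            "detect": {"*": 50},
        }
    ],
}

resp = requests.post(DOODS_URL, json=payload, timeout=60)
print(resp.json())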

Allocation of 1440000 exceeds 10% of free system memory

Hello, there seems to be a hardcoded memory allocation limit somewhere when using the "tensorflow" detector.

config.yaml: all default

{"detector_name":"tensorflow","file":"/home/user/Downloads/carsmall.jpg","detect":{"car":50}}
Results in...

ARM7

tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1440000 exceeds 10% of free system memory

AMD64

2020-10-15 06:18:03.352413: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 30720000 exceeds 10% of free system memory.
2020-10-15 06:18:03.407720: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 23040000 exceeds 10% of free system memory.
2020-10-15 06:18:03.872055: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 45158400 exceeds 10% of free system memory.

Seemed to muddle through and eventually work on the desktop. Crashed the ARM.
[attached test image: carsmall.jpg]

Am I right in assuming TensorFlow is much more accurate but slower than the default?

USB transfer error 5 [LibUsbDataInCallback

Seemingly out of nowhere, DOODS will not start anymore... I get the following error:

2019-11-09T11:19:43.200-0500	INFO	detector/detector.go:76	Configured Detector	{"package": "detector", "name": "default", "type": "tflite-edgetpu", "model": "/share/doods/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite", "labels": 80, "width": 300, "height": 300}
2019-11-09T11:19:43.206-0500	INFO	server/server.go:273	API Listening	{"package": "server", "address": ":8082", "tls": false}
F :1129] HandleQueuedBulkIn transfer in failed. Not found: USB transfer error 5 [LibUsbDataInCallback]

Curious if you've ever seen it?

Question - What "ratio" is the detection result size?

I'm trying to work this out from testing and the code but not getting too far...

            "top": 0,
            "left": 0.05,
            "bottom": .8552,
            "right": 0.9441,

What "ratio" is this in/how do these positions relate to the original image? Is it a "percentage"? I.E right is 0.94% from the left of the image?

I'm trying to use cv2.rectangle to draw a bounding box around the detected object. Boxes are getting drawn, just not in the right place! :)
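If the values are fractions of the full image size (0.0-1.0), as the example result above suggests, the box can be drawn by scaling each coordinate by the image's pixel dimensions. A minimal sketch (the filenames are placeholders):

# Sketch: scale fractional box coordinates to pixels and draw the box with OpenCV.
# Assumes top/bottom are fractions of the image height and left/right of the width.
import cv2

image = cv2.imread("snapshot.jpg")  # placeholder input image
height, width = image.shape[:2]

detection = {"top": 0, "left": 0.05, "bottom": 0.8552, "right": 0.9441}

x1 = int(detection["left"] * width)
y1 = int(detection["top"] * height)
x2 = int(detection["right"] * width)
y2 = int(detection["bottom"] * height)

cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("snapshot_boxed.jpg", image)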

Doos as HA repository

I have managed to install DOODS through the Home Assistant Add-on store repository https://github.com/snowzach/hassio-addons
My HA runs in VirtualBox on an Intel NUC i5; it has enough resources and it runs well.

I am also using DOODS with a Coral USB accelerator, and it works fine most of the time.
This is my configuration:
server:
  port: '8080'
  auth_key: ''
doods.detectors:
  - name: default
    type: tflite
    modelFile: /share/doods/models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
    labelFile: /share/doods/models/coco_labels.txt
    numThreads: 1
    numConcurrent: 1
    hwAccel: true

Section from the log:
2021-01-18T10:24:39.495Z INFO tflite/detector.go:399 Detection Complete {"package": "detector.tflite", "name": "default", "id": "", "duration": 0.073187448, "detections": 20, "device": {"Type":1,"Path":"/sys/bus/usb/devices/2-1"}}
2021-01-18T10:24:39.496Z INFO server/server.go:139 HTTP Request {"status": 200, "took": 0.084034516, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "d5f40609-doods/M2Bq3DJE8O-000152", "remote": "192.168.XX.XX:44530"}

But throughout the day I am getting these errors, and I do not know how to minimize them as much as possible:

2021-01-17 20:15:43 WARNING (MainThread) [homeassistant.helpers.entity] Update of image_processing.doods_terasa is taking over 10 seconds
2021-01-17 20:15:45 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:03
2021-01-17 20:15:48 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:03
2021-01-17 20:15:51 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:03
2021-01-17 20:15:54 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:03
2021-01-17 20:15:57 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:03
2021-01-17 20:16:00 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:03
2021-01-17 20:16:03 WARNING (MainThread) [homeassistant.components.image_processing] Updating doods image_processing took longer than the scheduled update interval 0:00:03
2021-01-17 20:16:04 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.doods_terasa fails
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 445, in _make_request
six.raise_from(e, None)
File "", line 3, in raise_from
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 440, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.8/http/client.py", line 1347, in getresponse
response.begin()
File "/usr/local/lib/python3.8/http/client.py", line 307, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.8/http/client.py", line 276, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.8/site-packages/urllib3/util/retry.py", line 531, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.8/site-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 445, in _make_request
six.raise_from(e, None)
File "", line 3, in raise_from
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 440, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.8/http/client.py", line 1347, in getresponse
response.begin()
File "/usr/local/lib/python3.8/http/client.py", line 307, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.8/http/client.py", line 276, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 278, in async_update_ha_state
await self.async_device_update()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 482, in async_device_update
await task
File "/usr/src/homeassistant/homeassistant/components/image_processing/init.py", line 132, in async_update
await self.async_process_image(image.content)
File "/usr/src/homeassistant/homeassistant/components/image_processing/init.py", line 112, in async_process_image
return await self.hass.async_add_executor_job(self.process_image, image)
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/homeassistant/homeassistant/components/doods/image_processing.py", line 299, in process_image
response = self._doods.detect(
File "/usr/local/lib/python3.8/site-packages/pydoods/init.py", line 29, in detect
response = requests.post(
File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 119, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

Coral EdgeTPU dev board

Hello,

I've installed DOODS on my EdgeTPU dev board hoping it would make use of its hardware acceleration, but it doesn't detect the EdgeTPU.

ERROR detector/detector.go:73 Could not initialize detector edgetpu: no edgetpu devices detected {"package": "detector"}

Any help will be greatly appreciated!

Thanks

Expected memory / log rotate?

Is 2.5 GB of memory usage expected after running for some time?
Is the log rotated by default in the container, or could that be a culprit of the "high" usage?

Add-on does not seem to like Hyper-V

19-11-27 13:54:09 ERROR (SyncWorker_28) [hassio.docker] Can't start addon_d5f40609_doods: 500 Server Error: Internal Server Error ("linux runtime spec devices: error gathering device information while adding custom device "/dev/bus/usb": no such file or directory")

This happens when I run it under Hyper-V; it seems to work fine under VirtualBox.
