
Comments (24)

PawelPeczek-Roboflow commented on May 13, 2024

Hello there :) Thanks for raising an issue - that's a significant help in making the library better.

Regarding the nature of the problem itself.
The error:

[02/10/24 20:11:11] ERROR    Could not sent prediction with frame_id=3493 to sink due to error: OpenCV(4.8.0) /io/opencv/modules/highgui/src/window.cpp:1272:   sinks.py:252 error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'  

says that there is a problem with the invocation of the OpenCV function cvShowImage, which can happen in several cases - including having the headless version of the OpenCV Python package installed, or missing underlying system libraries. Besides that, OpenCV may fail to render images on the screen when no display is available (say, when we are connected to a device via SSH and run the script there). The first step would be to examine the Python environment where inference was installed and check whether a simple cv2.imshow(...) call works in your environment.
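As a quick sanity check (not from the original reply - just a minimal sketch), try displaying a dummy image in that same environment:

# Minimal display check: if no window appears (or an error like the one above
# is raised), the problem is the OpenCV build / environment, not the pipeline.
import cv2
import numpy as np

image = np.zeros((240, 320, 3), dtype=np.uint8)  # plain black test frame
cv2.imshow("display test", image)
cv2.waitKey(0)
cv2.destroyAllWindows()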

I've found the following link with a couple of suggested solutions: opencv/opencv-python#18

If your Raspberry Pi is not connected to any display device, an alternative approach would be to use a different sink (other than render_boxes(...), which displays detections on screen) - namely UDPSink or VideoFileSink (see the inference docs for details).


amankumarchagti commented on May 13, 2024

On using UDPSink:

# import the InferencePipeline interface
from inference import InferencePipeline
# import a built in sink called render_boxes (sinks are the logic that happens after inference)
from inference.core.interfaces.stream.sinks import render_boxes
from inference.core.interfaces.stream.sinks import UDPSink

udp_sink = UDPSink(ip_address="127.0.0.1", port=9094)
# create an inference pipeline object
pipeline = InferencePipeline.init(
    model_id="cow-lie-stand-walk/2", # set the model id
    video_reference="rtsp://192.168.1.103:5543/live/channel0", # set the video reference (source of video), it can be a link/path to a video file, an RTSP stream url, or an integer representing a device id (usually 0 for built in webcams)
    on_prediction=udp_sink.send_predictions, # tell the pipeline object what to do with each set of inference by passing a function
    api_key="<API-KEY>", # provide your roboflow api key for loading models from the roboflow api
)
# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()

I get the error below:

Traceback (most recent call last):
  File "/home/sekrop/Image-processing-main/AnimalDetection/roboflow_detection.py", line 9, in <module>
    udp_sink = UDPSink(ip_address="127.0.0.1", port=9094)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: UDPSink.__init__() missing 1 required positional argument: 'udp_socket'


PawelPeczek-Roboflow commented on May 13, 2024

Good point - we have a bug in the docs. It has been fixed on the dev branch and will be released soon.

For the time being:
UDPSink.init(ip_address="127.0.0.1", port=9094)
is the way to go.

Please also make sure you have a UDP receiver active (one can be found here: https://github.com/roboflow/inference/blob/main/development/stream_interface/udp_receiver.py).
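If you just want to eyeball the payloads, a minimal stand-in receiver (illustrative only - the script linked above is the reference implementation) could look like this:

# Minimal UDP receiver sketch: binds to the port UDPSink targets and prints
# whatever datagrams arrive (serialized predictions).
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9094))
while True:
    payload, _ = receiver.recvfrom(65535)
    print(payload.decode("utf-8"))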

Sorry for the inconvenience 😢


PawelPeczek-Roboflow commented on May 13, 2024

Please let us know if that helped :)


amankumarchagti commented on May 13, 2024

Yes, that helps, but how can I print the predictions over the CLI without a monitor?


PawelPeczek-Roboflow commented on May 13, 2024

You have a couple of options:

  • if you are operating on the Jetson itself (not over SSH) you should be able to display predictions, given that inference is installed in an environment without headless OpenCV and with all the underlying libraries required to display images
  • if you want to run the inference process on the Jetson and then display predictions on a different device, the setup would need to be more sophisticated (see the RTSP streaming approach further down in this thread)

Probably in a PROD environment you would like to build some logic on top of the predictions rather than on video frames with boxes drawn on them (those are less useful in an automated procedure) - for debugging I would suggest operating directly on the Jetson if that is feasible; if not, stream the overlay using the dummy-stream sink I've pointed out. A minimal sketch of prediction-level logic is shown below.
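For illustration (hypothetical, not part of the original answer), a sink that skips rendering entirely and only acts on the prediction dictionaries could be as simple as:

# Hypothetical prediction-only sink: counts detections per class and prints
# the tally - the kind of logic you would build on top of raw predictions.
from collections import Counter

from inference.core.interfaces.camera.entities import VideoFrame


def count_detections_sink(predictions: dict, video_frame: VideoFrame) -> None:
    counts = Counter(p["class"] for p in predictions["predictions"])
    print(f"frame {video_frame.frame_id}: {dict(counts)}")

# pass it to InferencePipeline.init(..., on_prediction=count_detections_sink)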


amankumarchagti commented on May 13, 2024

First of all, I am using a Raspberry Pi 4 Model B (8 GB), I am connected to it over SSH, and I am running the code below.

# import the InferencePipeline interface
from inference import InferencePipeline
# import a built in sink called render_boxes (sinks are the logic that happens after inference)
from inference.core.interfaces.stream.sinks import render_boxes
from inference.core.interfaces.stream.sinks import UDPSink

udp_sink = UDPSink.init(ip_address="127.0.0.1", port=9094)
# create an inference pipeline object
pipeline = InferencePipeline.init(
    model_id="cow-lie-stand-walk/2", # set the model id
    video_reference="rtsp://192.168.1.103:5543/live/channel0", # set the video reference (source of video), it can be a link/path to a video file, an RTSP stream url, or an integer representing a device id (usually 0 for built in webcams)
    on_prediction=udp_sink.send_predictions, # tell the pipeline object what to do with each set of inference by passing a function
    api_key="<API-KEY>", # provide your roboflow api key for loading models from the roboflow api
)
# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()

Now I want the predictions on the same machine, in the console/SSH session, without a monitor. How can I do that?


PawelPeczek-Roboflow commented on May 13, 2024

You can dump predictions to a video file with our VideoFileSink, you can stream the predictions overlaid on the video using an RTSP server (publishing: https://github.com/roboflow/inference/blob/main/development/stream_interface/create_dummy_streams.py#L36, the streaming sink: https://github.com/roboflow/inference/blob/main/development/stream_interface/inference_pipeline_demo.py#L52), or you can connect directly to a console on the Raspberry Pi, given that a monitor is attached.
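A rough sketch of the VideoFileSink route (verify the exact signature against the current docs - the init classmethod and on_prediction handler used here are assumptions based on them):

# Dump annotated frames to a video file instead of displaying them -
# useful when running headless over SSH.
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import VideoFileSink

video_sink = VideoFileSink.init(video_file_name="predictions.avi")

pipeline = InferencePipeline.init(
    model_id="cow-lie-stand-walk/2",
    video_reference="rtsp://192.168.1.103:5543/live/channel0",
    on_prediction=video_sink.on_prediction,
    api_key="<API-KEY>",
)
pipeline.start()
pipeline.join()
video_sink.release()  # close the output file once the pipeline stops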

What would you like to achieve at the end of the day? Only visualise the predictions?


amankumarchagti commented on May 13, 2024

I used the custom sink approach from the docs (VideoFrame) to put annotations on the image from the RTSP stream and print the predictions as a dictionary in the console:
https://inference.roboflow.com/quickstart/create_a_custom_inference_pipeline_sink/#install-inference

I used the following code:

from inference import InferencePipeline
from inference.core.interfaces.camera.entities import VideoFrame

# Import opencv to display our annotated images
import cv2
# Import supervision to help visualize our predictions
import supervision as sv

# create a simple box annotator to use in our custom sink
annotator = sv.BoxAnnotator()

def my_custom_sink(predictions: dict, video_frame: VideoFrame):
    #print predictions on console
    print(predictions)
    # get the text labels for each prediction
    labels = [p["class"] for p in predictions["predictions"]]
    # load our predictions into the Supervision Detections API
    detections = sv.Detections.from_inference(predictions)
    # annotate the frame using our supervision annotator, the video_frame, the predictions (as supervision Detections), and the prediction labels
    image = annotator.annotate(
        scene=video_frame.image.copy(), detections=detections, labels=labels
    )
    # display the annotated image
    cv2.imshow("Predictions", image)
    cv2.waitKey(1)

pipeline = InferencePipeline.init(
    model_id="test-model/1",
    video_reference="rtsp://192.168.0.100:5543/live/channel0",
    on_prediction=my_custom_sink,
    api_key="<API-KEY>",
)

pipeline.start()
pipeline.join()


PawelPeczek-Roboflow commented on May 13, 2024

This will probably not work because of cv2.imshow("Predictions", image) - you cannot display an image without a display available.


amankumarchagti commented on May 13, 2024

In that case, can we print only the predictions?


PawelPeczek-Roboflow commented on May 13, 2024

If you really wish to have visualisation, you should do something like this:

import subprocess
from functools import partial

import numpy as np

from inference.core.interfaces.stream.sinks import render_boxes


def stream_prediction(image: np.ndarray, ffmpeg_process: subprocess.Popen) -> None:
    # pipe the rendered frame (BGR -> RGB) into the ffmpeg process
    ffmpeg_process.stdin.write(image[:, :, ::-1].astype(np.uint8).tobytes())


def open_ffmpeg_stream_process(stream_id: int) -> subprocess.Popen:
    args = (
        "ffmpeg -re -stream_loop -1 -f rawvideo -pix_fmt "
        "rgb24 -s 640x480 -i pipe:0 -pix_fmt yuv420p "
        f"-f rtsp -rtsp_transport tcp {STREAM_SERVER_URL}/predictions{stream_id}.stream"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)


ffmpeg_process = open_ffmpeg_stream_process(stream_id=stream_id)
on_frame_rendered = partial(stream_prediction, ffmpeg_process=ffmpeg_process)
on_prediction = partial(
    render_boxes,
    display_statistics=enable_stats,
    on_frame_rendered=on_frame_rendered,
)

and attach that on_prediction handler to the inference pipeline.

STREAM_SERVER_URL should point to the URL of the Docker container running the RTSP stream server - from that server you could then fetch the stream with the predictions overlaid.


amankumarchagti commented on May 13, 2024

Is there a way I can show cv2.imshow("Predictions", image) on my laptop's screen in real time?
Can you give me sink code for that?


PawelPeczek-Roboflow commented on May 13, 2024

Yes, you could do so:

  • prepare this config:
"protocols": ["tcp"]
"paths":
  "all": 
    "source": "publisher"

and save it in some location.

  • run the RTSP server on the laptop:
docker run --rm --name rtsp_server -d -v {config_path}:/rtsp-simple-server.yml -p 8554:8554 aler9/rtsp-simple-server:v1.3.0
  • let's assume that your laptop has IP 192.168.0.1
  • run your inference pipeline on the Raspberry Pi (I assume you have ffmpeg and the other required dependencies installed):
from functools import partial
import subprocess

import numpy as np

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes


def stream_prediction(image: np.ndarray, ffmpeg_process: subprocess.Popen) -> None:
    ffmpeg_process.stdin.write(image[:, :, ::-1].astype(np.uint8).tobytes())


def open_ffmpeg_stream_process() -> subprocess.Popen:
    args = (
        "ffmpeg -re -stream_loop -1 -f rawvideo -pix_fmt "
        "rgb24 -s 640x480 -i pipe:0 -pix_fmt yuv420p "
        "-f rtsp -rtsp_transport tcp rtsp://192.168.0.1:8554/predictions.stream"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)

ffmpeg_process = open_ffmpeg_stream_process()
on_frame_rendered = partial(stream_prediction, ffmpeg_process=ffmpeg_process)
on_prediction = partial(
    render_boxes,
    display_statistics=True,
    on_frame_rendered=on_frame_rendered,
)

pipeline = InferencePipeline.init(
    model_id="test-model/1",
    video_reference="rtsp://192.168.0.100:5543/live/channel0",
    on_prediction=on_prediction,
    api_key="<API-KEY>",
)

pipeline.start()
pipeline.join()
  • then, your RTSP server should receive the video with the predictions overlay. You can get a preview on your laptop using:
import cv2

from inference.core.interfaces.camera.utils import get_video_frames_generator
from inference.core.interfaces.camera.video_source import VideoSource


def preview(max_fps: int = 30) -> None:
    camera = VideoSource.init(video_reference="rtsp://192.168.0.1:8554/predictions.stream")
    camera.start()
    for video_frame in get_video_frames_generator(video=camera, max_fps=max_fps):
        cv2.imshow("Stream", video_frame.image)
        _ = cv2.waitKey(1)
    cv2.destroyAllWindows()

Please remember that this is pseudo-code - you will probably need to polish some things on your end, but it illustrates end-to-end how I got it working on my side.


amankumarchagti commented on May 13, 2024

I am getting this error:

[screenshots of the error]

Below is the code I am running.

  1. Camera IP: 192.168.1.100
  2. Raspberry Pi (on which the Roboflow code is running and to which the camera is connected): 192.168.1.103
  3. My local laptop (Windows 11): 192.168.1.101
  • Code running on the Raspberry Pi:
from inference import InferencePipeline
from functools import partial
from inference.core.interfaces.stream.sinks import render_boxes
import numpy as np
import subprocess

def stream_prediction(image: np.ndarray, ffmpeg_process: subprocess.Popen) -> None:
    ffmpeg_process.stdin.write(image[:, :, ::-1].astype(np.uint8).tobytes())

def open_ffmpeg_stream_process() -> subprocess.Popen:
    args = (
        "ffmpeg -re -stream_loop -1 -f rawvideo -pix_fmt "
        "rgb24 -s 640x480 -i pipe:0 -pix_fmt yuv420p "
        f"-f rtsp -rtsp_transport tcp 192.168.1.101:8554/predictions.stream"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)

ffmpeg_process = open_ffmpeg_stream_process()
on_frame_rendered = partial(stream_prediction, ffmpeg_process=ffmpeg_process)
on_prediction = partial(
    render_boxes,
    display_statistics=True,
    on_frame_rendered=on_frame_rendered,
)

pipeline = InferencePipeline.init(
    model_id="cow-counting/1",
    video_reference="rtsp://192.168.1.100:5543/live/channel0",
    on_prediction=on_prediction,
    api_key="",
)

pipeline.start()
pipeline.join()
  • simple_rtsp_server.yml
"protocols": ["tcp"]
"paths":
  "all": 
    "source": "publisher"

Docker command:

docker run --rm --name rtsp_server -d -v 'C:\Users\amanc\Downloads\Image-processing-main (1)\Image-processing-main\rtsp_server\rtsp-simple-server.yml' -p 8554:8554 aler9/rtsp-simple-server:v1.3.0

Docker is up and running (verified).

  • Code I am running on my laptop
import cv2

from inference.core.interfaces.camera.utils import get_video_frames_generator
from inference.core.interfaces.camera.video_source import VideoSource

def preview() -> None:
    camera = VideoSource.init(video_reference="192.168.1.103:8554/predictions.stream")
    camera.start()
    for video_frame in get_video_frames_generator(video=camera, max_fps=max_fps):
        cv2.imshow("Stream", video_frame.image)
        _ = cv2.waitKey(1)
    cv2.destroyAllWindows()

When I run the above Python code on my laptop, it exits the shell (without any output or error) even while the code is running on the Raspberry Pi. Can you confirm whether I should use the Raspberry Pi's IP or the laptop's IP in the code above?

NOTE: Can you please verify that all the IPs in the code are correct? These are the actual IPs, and I have listed them all at the start of this message.

Thank you


PawelPeczek-Roboflow commented on May 13, 2024

  1. You need to run the Docker container with the RTSP server on your laptop (IP: 192.168.1.101)
  2. You need to configure the InferencePipeline on the Raspberry Pi to push the stream to 192.168.1.101:8554
  3. You need to wait on the laptop with the video preview until the stream is emitted (otherwise the code ends without reading any frame), and the correct video reference would be 192.168.1.101:8554/predictions.stream - the RTSP server. A rough sketch of such a waiting preview is shown below.
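For illustration only (plain OpenCV instead of inference's VideoSource, and the stream name is assumed to match the one above):

# Keep retrying until the Raspberry Pi starts publishing to the RTSP server,
# then show frames until the stream ends.
import time

import cv2

STREAM_URL = "rtsp://192.168.1.101:8554/predictions.stream"

while True:
    capture = cv2.VideoCapture(STREAM_URL)
    if capture.isOpened():
        break
    capture.release()
    print("stream not available yet - retrying in 2s")
    time.sleep(2)

while True:
    grabbed, frame = capture.read()
    if not grabbed:
        break
    cv2.imshow("Stream", frame)
    cv2.waitKey(1)

capture.release()
cv2.destroyAllWindows()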


PawelPeczek-Roboflow commented on May 13, 2024

There is, however, a significant issue with the production of the stream.
Please check if this works on your Raspberry Pi: https://stackoverflow.com/questions/69379674/how-to-stream-cv2-videowriter-frames-to-and-rtsp-server
I mean the dummy example:

def open_ffmpeg_stream_process(self):
    args = (
        "ffmpeg -re -stream_loop -1 -f rawvideo -pix_fmt "
        "rgb24 -s 1920x1080 -i pipe:0 -pix_fmt yuv420p "
        "-f rtsp 192.168.1.101:8554/predictions.stream"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)


def capture_loop():
    ffmpeg_process = open_ffmpeg_stream_process()
    capture = cv2.VideoCapture(<video/stream>)    
    while True:
        grabbed, frame = capture.read()
        if not grabbed:
            break
        ffmpeg_process.stdin.write(frame.astype(np.uint8).tobytes())
    capture.release()
    ffmpeg_process.stdin.close()
    ffmpeg_process.wait()


amankumarchagti commented on May 13, 2024

I tried this code on the Raspberry Pi, but it gives an error:

from inference import InferencePipeline
from functools import partial
from inference.core.interfaces.stream.sinks import render_boxes
import numpy as np
import subprocess

def stream_prediction(image: np.ndarray, ffmpeg_process: subprocess.Popen) -> None:
    ffmpeg_process.stdin.write(image[:, :, ::-1].astype(np.uint8).tobytes())


def open_ffmpeg_stream_process(self):
    args = (
        "ffmpeg -re -stream_loop -1 -f rawvideo -pix_fmt "
        "rgb24 -s 1920x1080 -i pipe:0 -pix_fmt yuv420p "
        "-f rtsp 192.168.1.101:8554/predictions.stream"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)


def capture_loop():
    ffmpeg_process = open_ffmpeg_stream_process()
    capture = cv2.VideoCapture("rtsp://192.168.0.100:5543/live/channel0")    
    while True:
        grabbed, frame = capture.read()
        if not grabbed:
            break
        ffmpeg_process.stdin.write(frame.astype(np.uint8).tobytes())
    capture.release()
    ffmpeg_process.stdin.close()
    ffmpeg_process.wait()

ffmpeg_process = open_ffmpeg_stream_process()
on_frame_rendered = partial(stream_prediction, ffmpeg_process=ffmpeg_process)
on_prediction = partial(
    render_boxes,
    display_statistics=True,
    on_frame_rendered=on_frame_rendered,
)

pipeline = InferencePipeline.init(
    model_id="test-model/1",
    video_reference="rtsp://192.168.0.100:5543/live/channel0",
    on_prediction=on_prediction,
    api_key="",
)

pipeline.start()
pipeline.join()

Error:

[screenshot of the error]


PawelPeczek-Roboflow commented on May 13, 2024

The self param - sorry.


PawelPeczek-Roboflow commented on May 13, 2024

I mean in this function:

def open_ffmpeg_stream_process(self <- to be discarded):
    args = (
        "ffmpeg -re -stream_loop -1 -f rawvideo -pix_fmt "
        "rgb24 -s 1920x1080 -i pipe:0 -pix_fmt yuv420p "
        "-f rtsp 192.168.1.101:8554/predictions.stream"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)


amankumarchagti commented on May 13, 2024

The Raspberry Pi code is giving an error (FYI: the firewall is off on both Windows and the Raspberry Pi):

[screenshot of the error]


PawelPeczek-Roboflow commented on May 13, 2024

The error message connection to tcp://8554:544 is suspicious - as if the host is missing from the URI.

Let's start from a toy example, ok?

import subprocess

import cv2
import numpy as np


def open_ffmpeg_stream_process():
    args = (
        "ffmpeg -re -stream_loop -1 -f rawvideo -pix_fmt "
        "rgb24 -s 1920x1080 -i pipe:0 -pix_fmt yuv420p "
        "-f rtsp rtsp://192.168.1.101:8554/predictions.stream"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)


def capture_loop():
    ffmpeg_process = open_ffmpeg_stream_process()
    capture = cv2.VideoCapture(<video/stream>)
    while True:
        grabbed, frame = capture.read()
        if not grabbed:
            break
        ffmpeg_process.stdin.write(frame.astype(np.uint8).tobytes())
    capture.release()
    ffmpeg_process.stdin.close()
    ffmpeg_process.wait()

just reading the stream from the camera, decoding it, and sending it to the RTSP server.


PawelPeczek-Roboflow commented on May 13, 2024

There are plenty of possible reasons why it does not work on your Jetson. Next time, could you please send a dump of all the error information as text instead of a screenshot?


amankumarchagti commented on May 13, 2024

FYI, I am using a Raspberry Pi (not a Jetson). I will share the logs.

