Comments (19)
So your advice helped and I was able to get output video from it. Thank you so much for all the help!!!
from retinaface-tf2.
Hi,
Did you run all the installation steps? The rcnn module that contains the bbox operations needs to be built by running "make".
I did run make in a command window, but I'm not sure how to use the Python code from your "USAGE" section. When I open the project in PyCharm and type in the usage code, it is still missing those imports from my message above. I'm probably doing something wrong while executing the code. Can you tell me the best way to execute it?
Or maybe you could tell me which version of Python is best to work with.
When I try to run my main.py script with your usage code, this shows up as an error (but I did run "make" and it executed well):
Did you modify line 2 of bbox_transform.py?
The line should be
from ..cython.bbox import bbox_overlaps_cython
instead of
from cython.bbox import bbox_overlaps_cython
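To illustrate why the two leading dots matter, here is a self-contained toy example of a package-relative import. The directory names and the bbox_overlaps_cython stub are placeholders, not the real repo contents:

```python
# Toy package layout showing why the leading dots (relative import) matter.
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()

def write(relpath, text=""):
    path = os.path.join(root, relpath)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(text)

write("rcnn/__init__.py")
write("rcnn/cython/__init__.py")
# Stand-in for the compiled bbox.pyx extension produced by "make":
write("rcnn/cython/bbox.py",
      "def bbox_overlaps_cython(boxes, query_boxes):\n    return []\n")
write("rcnn/processing/__init__.py")
# The sibling package must be imported *relatively*, with the two dots:
write("rcnn/processing/bbox_transform.py",
      "from ..cython.bbox import bbox_overlaps_cython\n")

sys.path.insert(0, root)
mod = importlib.import_module("rcnn.processing.bbox_transform")
print(mod.bbox_overlaps_cython(None, None))  # -> []
```

With the bare `from cython.bbox import ...` form, Python would look for a top-level package named cython on sys.path instead of the sibling directory inside the project, which is why the import fails once the module is used as part of the package.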
I did, but if I put it back like it was, it shows this:
which means it still can't import bbox.pyx, but when I run "make" in the terminal it shows this:
Not sure if this helps, but when I do everything again from scratch and run "make", this shows up:
Apparently it is working when I run python detect.py --weights_path="./data/retinafaceweights.npy" --sample_img="./sample-images/t2.jpg" in a command window, but I still don't know how to use it in a main.py file like you did. I think there may be some issue with the Python version on my computer, or something with PyTorch. Can you tell me the best way, step by step, to set up a main.py file that calls detect? Thank you so much.
The output of the make command looks OK.
So you can confirm that running python detect.py --weights_path="./data/retinafaceweights.npy" --sample_img="./sample-images/t2.jpg" generates the output image with correct bounding boxes?
If the python detect.py command works well, then the issue is related to how the PyCharm Python terminal (where you typed the usage code) searches for files and dependencies. I have never tried running Python code in a PyCharm terminal. What I usually use is a tool called IPython. You can install it with pip (pip install IPython) and then open a console by running ipython at the root of the project. There you should be able to type and execute the code. Let me know how it goes.
Hi, I'm trying IPython, and my question is: when I type in the usage code, how can I see the result? I mean, when I type cv2.imshow("output", img), it starts to loop, freezes, and shows nothing.
And also, is it possible to change your code so I can apply it to a video stream (e.g. FileVideoStream)?
In order to show results, I would use the cv2.imwrite function and save the output image to disk, as is done at line 34 in commit 5f68ce8.
It would be possible to apply the code to a video stream and run the algorithm on each frame of the video, yes, though I'm not sure the algorithm would be fast enough to process a video in real time. That would depend on the size of the video's frames.
Is there any chance you could share code like that with me? As you can see, I'm a real beginner at CNNs and at programming in general. I don't need it to be fast or real time, just to apply the algorithm to a video and save that video to a file. I tried some changes here, but I don't think it's going to work.
Hi, it's me again. It finally looks like I made code to apply your retinaface algorithm to a video (see below), but when it's done, I can't see any output file anywhere. Any advice on that? Thanks.
import cv2
import numpy as np
from absl import app, flags
from absl.flags import FLAGS
from retinaface import RetinaFace
import sys
import datetime
import os
import glob

flags.DEFINE_string('weights_path', './data/retinafaceweights.npy',
                    'network weights path')
flags.DEFINE_float('det_thresh', 0.9, "detection threshold")
flags.DEFINE_float('nms_thresh', 0.4, "nms threshold")
flags.DEFINE_bool('use_gpu_nms', True, "whether to use gpu for nms")
flags.DEFINE_string('video_path', './data/video_test.mp4',
                    'video path')

def _main(_argv):
    cap = cv2.VideoCapture()
    cap.open(FLAGS.video_path)
    gpuid = 0
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    size = (width, height)
    fps = 24
    frames = []
    counter = 0
    detector = RetinaFace(FLAGS.weights_path, FLAGS.use_gpu_nms, FLAGS.nms_thresh)
    while True:
        (rv, im) = cap.read()  # im is a valid image if and only if rv is true
        if not rv:  # we reached the end of the video file; write the modified frames to a new video file
            filename = '/data/output1.avi'
            out = cv2.VideoWriter(filename, cv2.VideoWriter_fourcc(*'MJPG'), fps, size, True)
            for i in range(len(frames)):
                out.write(frames[i])
            out.release()
            break
        counter = counter + 1
        # im_shape = im.shape
        # target_size = scales[0]
        # max_size = scales[1]
        # im_size_min = np.min(im_shape[0:2])
        # im_size_max = np.max(im_shape[0:2])
        # im_scale = float(target_size) / float(im_size_min)
        # if np.round(im_scale * im_size_max) > max_size:
        #     im_scale = float(max_size) / float(im_size_max)
        faces, landmarks = detector.detect(im, FLAGS.det_thresh)
        if faces is not None:
            for i in range(faces.shape[0]):
                box = faces[i].astype(int)
                color = (0, 0, 255)
                cv2.rectangle(im, (box[0], box[1]), (box[2], box[3]), color, 2)
        frames.append(im)
        if counter % fps == 0:
            print(counter)
    cap.release()

if __name__ == '__main__':
    try:
        app.run(_main)
    except SystemExit:
        pass
In the code you posted, which seems OK, the results are saved in the object frames, which is a list of images that, put together, form the frames of the video. You can, for instance, use the cv2.VideoWriter object to save these images in a video format, as presented here:
https://theailearner.com/2018/10/15/creating-video-from-images-using-opencv-python/
But you would write:
video = cv2.VideoWriter(video_name, 0, 1, (width, height))
for frame in frames:
    video.write(frame)
cv2.destroyAllWindows()
video.release()
So the code I used before is not good? It looks pretty similar to me (see below). And if I use your code, is there no need to specify a codec for the writer?
if not rv:  # we reached the end of the video file; write the modified frames to a new video file
    filename = '/data/output1.avi'
    out = cv2.VideoWriter(filename, cv2.VideoWriter_fourcc(*'MJPG'), fps, size, True)
    for i in range(len(frames)):
        out.write(frames[i])
    out.release()
    break
Hi,
I cannot run the make command on Windows.
Can you tell me how to run it, or is there a way to set it up manually?
I want to use retinaface in a PyCharm project.