
proctoring-ai's Introduction

Proctoring-AI

Project to create an automated proctoring system where the user can be monitored automatically through the webcam and microphone. The project is divided into two parts: vision-based and audio-based functionalities. An explanation of some functionalities of the project can be found in my Medium article.

Prerequisites

To run the programs in this repo, do the following:

  • create a virtual environment using the command:
    • python -m venv venv
  • activate the virtual environment
    • .\venv\Scripts\activate (Windows users)
    • source ./venv/bin/activate (macOS and Linux users)
  • install the requirements
    • pip install --upgrade pip (to upgrade pip)
    • pip install -r requirements.txt

Once the requirements have been installed, the programs will run successfully, except for the person_and_phone.py script, which requires a model to be downloaded.

More on that later.

For vision:

Tensorflow>2
OpenCV
scikit-learn==0.19.1 # for face spoofing; the model used was trained with this version and does not work with recent ones.

For audio:

pyaudio
speech_recognition
nltk

Vision

It has six vision-based functionalities right now:

  1. Track eyeballs and report if the candidate is looking left, right, or up.
  2. Detect if the candidate opens their mouth, using the distance between the lips recorded at the start.
  3. Instance segmentation to count the number of people and report if no one or more than one person is detected.
  4. Find and report any instances of mobile phones.
  5. Head pose estimation to find where the person is looking.
  6. Face spoofing detection.

Face detection

Earlier, Dlib's frontal-face HOG detector was used to find faces, but it did not give very good results. In face_detection, different face detection models are compared; OpenCV's DNN module gives the best results, which are presented in this article.

It is implemented in face_detector.py and is used for tracking eyes, mouth opening detection, head pose estimation, and face spoofing.

An additional quantized model has also been added for the face detector, as described in Issue 14. It can be used by setting the parameter quantized to True when calling get_face_detector(). In a quick test of the face detector on my laptop, the normal version gave ~17.5 FPS while the quantized version gave ~19.5 FPS. This is especially useful when deploying on edge devices, since the model is uint8 quantized.
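As a minimal sketch of how the quantized detector might be selected: get_face_detector() is the factory described above, while the find_faces() helper (returning face boxes for a frame) is an assumption made here for illustration.

    # Sketch only: get_face_detector() is described above; find_faces() is an
    # assumed helper that returns [x1, y1, x2, y2] face boxes for a frame.
    import cv2
    from face_detector import get_face_detector, find_faces

    face_model = get_face_detector(quantized=True)  # uint8-quantized detector

    cap = cv2.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        for x1, y1, x2, y2 in find_faces(frame, face_model):
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()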

Facial Landmarks

Earlier, Dlib's facial landmarks model was used, but it did not give good results when the face was at an angle. Now, a model provided in this repository is used. A comparison between them and the reason for choosing the new TensorFlow-based model is given in this article.

It is implemented in face_landmarks.py and is used for tracking eyes, mouth opening detection, and head pose estimation.
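A minimal sketch of how the two pieces fit together, using get_landmark_model() and detect_marks() as they appear in the scripts and in the issue tracebacks below; the find_faces() helper and the 68-point output shape are assumptions.

    import cv2
    from face_detector import get_face_detector, find_faces
    from face_landmarks import get_landmark_model, detect_marks

    face_model = get_face_detector()
    landmark_model = get_landmark_model()

    img = cv2.imread("candidate.jpg")             # any image containing a face
    for face in find_faces(img, face_model):      # assumed to return face boxes
        marks = detect_marks(img, landmark_model, face)  # assumed shape (68, 2)
        for x, y in marks:
            cv2.circle(img, (int(x), int(y)), 2, (0, 255, 0), -1)
    cv2.imwrite("landmarks.jpg", img)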

Note

If you want to use the dlib models, check out the old-master branch.

Eye tracking

eye_tracker.py is used to track eyes. A detailed explanation is provided in this article; however, it was written using dlib.

eye tracking

Mouth Opening Detection

mouth_opening_detector.py is used to check if the candidate opens his/her mouth during the exam, after recording the closed mouth initially. Its explanation can be found in the main article; however, it uses dlib, which can be easily changed to the new models.

Mouth opening detection
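The idea above (record a closed-mouth baseline, then compare against it) can be sketched as follows; the landmark pairs, calibration length, and threshold margin are illustrative assumptions, not the exact values used in mouth_opening_detector.py.

    import numpy as np

    # In the 68-point landmark scheme, the inner lips are points 60-67.
    LIP_PAIRS = [(61, 67), (62, 66), (63, 65)]   # upper/lower inner-lip pairs

    def lip_gap(marks):
        """Average vertical distance between paired upper and lower lip landmarks."""
        return np.mean([abs(marks[a][1] - marks[b][1]) for a, b in LIP_PAIRS])

    baseline = []

    def mouth_open(marks, calibration_frames=100, margin=1.5):
        """Record a closed-mouth baseline first, then flag noticeably wider gaps."""
        gap = lip_gap(marks)
        if len(baseline) < calibration_frames:
            baseline.append(gap)                 # still calibrating
            return False
        return gap > np.mean(baseline) * margin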

Person counting and mobile phone detection

person_and_phone.py is used for counting persons and detecting mobile phones. YOLOv3 is used in TensorFlow 2; see this article for more details.

person counting and phone detection
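A rough sketch of the counting logic, assuming the YoloV3() builder and load_darknet_weights() helper from yolo_helper.py that appear in the issues below, the usual (boxes, scores, classes, nums) output layout, and the standard COCO class ids (0 = person, 67 = cell phone); the input size is illustrative.

    import cv2
    import tensorflow as tf
    from yolo_helper import YoloV3, load_darknet_weights

    yolo = YoloV3()
    load_darknet_weights(yolo, "yolov3.weights")    # weights downloaded separately

    img = cv2.imread("frame.jpg")
    inp = tf.expand_dims(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), 0)
    inp = tf.image.resize(inp, (416, 416)) / 255.0  # adjust to the network's input size

    boxes, scores, classes, nums = yolo(inp)        # assumed output layout
    detected = [int(classes[0][i]) for i in range(int(nums[0]))]

    people, phones = detected.count(0), detected.count(67)
    if people != 1:
        print(f"Alert: {people} people in frame")
    if phones:
        print("Alert: mobile phone detected")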

Head pose estimation

head_pose_estimation.py is used for finding where the head is facing. An explanation is provided in this article.

head pose estimation
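The core of this approach is the classic Perspective-n-Point formulation: a handful of 3D points on a generic face model are matched to the corresponding detected 2D landmarks, and cv2.solvePnP recovers the head rotation. A self-contained sketch follows; the 3D reference points and camera approximation are illustrative, not the exact values in head_pose_estimation.py.

    import cv2
    import numpy as np

    # Generic 3D face model points: nose tip, chin, eye corners, mouth corners.
    MODEL_POINTS = np.array([
        (0.0, 0.0, 0.0),           # nose tip
        (0.0, -330.0, -65.0),      # chin
        (-225.0, 170.0, -135.0),   # left eye, left corner
        (225.0, 170.0, -135.0),    # right eye, right corner
        (-150.0, -150.0, -125.0),  # left mouth corner
        (150.0, -150.0, -125.0),   # right mouth corner
    ])

    def head_pose(image_points, frame_size):
        """image_points: the six 2D landmarks matching MODEL_POINTS, in order."""
        h, w = frame_size
        camera_matrix = np.array([[w, 0, w / 2],
                                  [0, w, h / 2],
                                  [0, 0, 1]], dtype="double")  # focal length ~ frame width
        dist_coeffs = np.zeros((4, 1))                          # assume no lens distortion
        ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS,
                                      np.array(image_points, dtype="double"),
                                      camera_matrix, dist_coeffs,
                                      flags=cv2.SOLVEPNP_ITERATIVE)
        return rvec, tvec  # head rotation/translation relative to the camera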

Face spoofing

face_spoofing.py is used for finding whether the face is real or a photograph/image. An explanation is provided in this article. The model and approach are taken from this GitHub repo.

face spoofing
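In outline, the check scores a hand-crafted feature vector of the face crop with a scikit-learn classifier loaded from a pickle (face_spoofing.pkl, referenced in the issues below). The sketch below assumes colour-histogram features and a models/ path; the exact feature extraction and path live in face_spoofing.py and the upstream repo.

    import cv2
    import numpy as np
    import joblib  # was sklearn.externals.joblib in scikit-learn 0.19.1

    clf = joblib.load("models/face_spoofing.pkl")        # path assumed

    def spoof_probability(face_img, bins=16):
        """Probability that the face crop is a spoof, per the pickled classifier.
        The histogram features here are illustrative only."""
        feats = []
        for code in (cv2.COLOR_BGR2YCrCb, cv2.COLOR_BGR2LUV):
            conv = cv2.cvtColor(face_img, code)
            for ch in range(3):
                hist = cv2.calcHist([conv], [ch], None, [bins], [0, 256])
                feats.append(cv2.normalize(hist, hist).flatten())
        feature_vector = np.concatenate(feats).reshape(1, -1)
        return clf.predict_proba(feature_vector)[0][1]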

FPS obtained

Functionality                  FPS on Intel i5
Eye Tracking                   7.1
Mouth Detection                7.2
Person and Phone Detection     1.3
Head Pose Estimation           8.5
Face Spoofing                  6.9

If you test on a different processor or a GPU, consider making a pull request to add the FPS obtained on that hardware.
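A generic way to measure FPS for such a contribution is a simple timing loop around the functionality under test (this is not the exact measurement code used for the table above):

    import time
    import cv2

    cap = cv2.VideoCapture(0)
    frames, start = 0, time.time()
    while frames < 200:                     # benchmark over a fixed number of frames
        ret, frame = cap.read()
        if not ret:
            break
        # ... run the functionality being benchmarked on `frame` here ...
        frames += 1
    cap.release()
    print(f"FPS: {frames / (time.time() - start):.1f}")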

Audio

It is divided into two parts:

  1. Audio from the microphone is recorded and converted to text using Google's speech recognition API. The API is called on a separate thread so that recording is not interrupted much; that thread processes the last recorded chunk, appends its text to a text file, and deletes the audio.
  2. Using NLTK, the stopwords are removed from that file. The question paper (in txt format) also has its stopwords removed, and the contents of the two are compared. Finally, the common words and their counts are presented to the proctor (see the sketch below).

The code for this part is available in audio_part.py.
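A minimal sketch of the comparison step in point 2, assuming illustrative file names (the real flow, including the recording thread, is in audio_part.py):

    from collections import Counter
    import nltk
    from nltk.corpus import stopwords

    nltk.download("stopwords", quiet=True)
    STOP = set(stopwords.words("english"))

    def content_words(path):
        """Lower-case the file, drop punctuation and NLTK English stopwords."""
        with open(path) as f:
            words = f.read().lower().split()
        return [w.strip(".,?!") for w in words if w not in STOP]

    spoken = Counter(content_words("transcript.txt"))   # speech-to-text output
    paper = set(content_words("question_paper.txt"))    # exam questions

    common = {w: c for w, c in spoken.items() if w in paper}
    print("Words from the paper spoken aloud:", common)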

To do

  1. Replace the HOG-based detector with OpenCV's DNN module's Caffe model, which will also solve the issues caused by side faces and occlusion.
  2. Replace the dlib-based facial landmarks with the CNN-based facial landmarks as used in head_pose_detector.
  3. Make a better face spoofing model, as the accuracy is not good currently.
  4. Use a smaller and faster model in place of YOLOv3 that can give good FPS on a CPU.
  5. Add a vision-based functionality: face recognition, so that no one else can replace the candidate and take the exam midway.
  6. Add a vision-based functionality: ID-card verification.
  7. Update the README with videos of each functionality and the FPS obtained.
  8. Add documentation (docstrings) to the functions in the code.

Problems

Speech-to-text conversion might not work well for all dialects.

Contributing

If you have any other ideas or implement any item from the to-do list, consider making a pull request. Please update the README as well in the pull request.

License

This project is licensed under the MIT License - see the LICENSE.md file for details. However, the facial landmarks detection model is trained on non-commercial-use datasets, so I am not sure whether it can be used for commercial purposes.

Like what I am doing?

Buy Me A Coffee

proctoring-ai's People

Contributors

mayowaobisesan, reka-berci-hajnovics, stu-ball, vardanagarwal


proctoring-ai's Issues

eye tracker not outputting anything (pre-recorded video)

Follow-up from issue #46.

I tested the code as you suggested (eye_tracker.py in particular) with a pre-recorded video instead of a livestream.

I don't see any of the outputs that I normally see with the livestream (e.g. Looking up, down) in the console. Can you please let me know if it's a bug or if I am doing something wrong?

Note: mouth_opening_detector.py and face_detector.py work and output the results to the console.

Assertion error when head near frame ends

The face landmark code returns an error:
OpenCV(4.4.0) ..\modules\imgproc\src\resize.cpp:3929: error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'

Full Traceback:

Traceback (most recent call last):

  File "C:\Users\hp\Desktop\minor project\new\untitled0.py", line 79, in <module>
    marks = detect_marks(frame, landmark_model, face)

  File "C:\Users\hp\Desktop\minor project\new\face_landmarks.py", line 99, in detect_marks
    face_img = cv2.resize(face_img, (128, 128))

This must be due to the fact that the face coordinates passed have exceeded the limits of the original image.

Frames are being dropped due to low threshold.

Looking to improve the data points we get after processing thresholds.
It looks like some frames are lost due to the threshold being small.

Also, I am trying to see if we can improve accuracy. Not sure if this is an issue. It would be good to benchmark the outcome.

Problem in person_and_phone

Hi!

I'm following the instructions here: "https://medium.com/analytics-vidhya/count-people-in-webcam-using-yolov3-tensorflow-f407679967d5". I'm using Python 3.7 and OpenCV 4. When I execute the script, I get:

Traceback (most recent call last):
  File "person_and_phone.py", line 337, in <module>
    for i in range(nums[0]):
TypeError: 'Tensor' object cannot be interpreted as an integer
[ WARN:0] global ..\modules\videoio\src\cap_msmf.cpp (435) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback

Any idea?

Thank you very much, you're doing a great job!!

from sklearn.externals import joblib - face_spoofing.py

There is one difficulty in using this module. The system throws an error at line 3.

It seems that the joblib name is no longer available under the scikit-learn module.
Using the joblib or pickle module directly throws an error while opening face_spoofing.pkl.
It seems that both modules internally use sklearn.externals.joblib to open the file.

How can this be resolved?

head_pose_estimation.py OSError

I'm getting this error when trying to run it. The only possible lead I have is that saved_model.pb may be corrupt, but I really have no idea.

C:\Users\Thursday\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:493: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\Thursday\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:494: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\Thursday\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:495: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\Thursday\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:496: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\Thursday\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:497: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\Thursday\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:502: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
  File "head_pose_estimation.py", line 128, in <module>
    landmark_model = get_landmark_model()
  File "C:\Users\Thursday\Projects\Proctoring-AI\face_landmarks.py", line 30, in get_landmark_model
    model = keras.models.load_model(saved_model)
  File "C:\Users\Thursday\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\models.py", line 240, in load_model
    with h5py.File(filepath, mode='r') as f:
  File "C:\Users\Thursday\Anaconda3\lib\site-packages\h5py\_hl\files.py", line 408, in __init__
    swmr=swmr)
  File "C:\Users\Thursday\Anaconda3\lib\site-packages\h5py\_hl\files.py", line 173, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = 'models/pose_model', errno = 13, error message = 'Permission denied', flags = 0, o_flags = 0)

Error in downloading dlib

Hello vardhan... I'm having an issue downloading dlib. I've tried various ways to install it but I'm not able to, so can you please help me install dlib or share the steps you followed to install it? Hope you can help me solve this.

IndexError: list index (0) out of range

I have tried to correct all the syntax that could throw the error, but it still shows the index error. Can anyone help me with this?
This is the error I am facing:

Traceback (most recent call last):
  File "mouth_opening_detector.py", line 13, in <module>
    landmark_model = get_landmark_model()
  File "/Users/shabnamsandhi/Desktop/Proctoring-AI-master/face_landmarks.py", line 30, in get_landmark_model
    model = keras.models.load_model(saved_model)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py", line 212, in load_model
    return saved_model_load.load(filepath, compile, options)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 130, in load
    _read_legacy_metadata(object_graph_def, metadata)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 179, in _read_legacy_metadata
    node_paths = _generate_object_paths(object_graph_def)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 204, in _generate_object_paths
    for reference in object_graph_def.nodes[current_node].children:
IndexError: list index (0) out of range

AttributeError: 'NoneType' object has no attribute 'copy' for eye_tracker.py

Hey, I am having trouble running the eye_tracker.py file. It is unable to grab the frame.

[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-kh7iq4w7\opencv\modules\videoio\src\cap_msmf.cpp (912) CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. Error: -2147483638
Traceback (most recent call last):
  File "D:/Projects/exam_proctoring_model/Proctoring-AI-master/eye_tracker.py", line 160, in <module>
    thresh = img.copy()
AttributeError: 'NoneType' object has no attribute 'copy'

No yolov3.weights file exist?

Hi,
I tried to run main.py but got an error. Could you please help check?

D:\project\opencv\Proctoring-AI>python main.py
D:\python\3.7.7\lib\site-packages\numpy_distributor_init.py:32: UserWarning: loaded more than 1 DLL from .libs:
D:\python\3.7.7\lib\site-packages\numpy.libs\libopenblas.NOIJJG62EMASZI6NYURL6JBKM4EVBGM7.gfortran-win_amd64.dll
D:\python\3.7.7\lib\site-packages\numpy.libs\libopenblas.TXA6YQSD3GCQQC22GEQ54J2UDCXDXHWN.gfortran-win_amd64.dll
stacklevel=1)
2020-07-07 10:34:13.058164: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-07-07 10:34:13.063303: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-07-07 10:34:31.033803: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2020-07-07 10:34:31.045068: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: UNKNOWN ERROR (303)
2020-07-07 10:34:31.052288: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-FCB4CGP
2020-07-07 10:34:31.056705: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-FCB4CGP
2020-07-07 10:34:31.068621: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-07-07 10:34:31.285820: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1f307b30cf0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-07 10:34:31.290499: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
  File "main.py", line 22, in <module>
    load_darknet_weights(yolo, 'yolov3.weights')
  File "D:\project\opencv\Proctoring-AI\yolo_helper.py", line 35, in load_darknet_weights
    wf = open(weights_file, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'yolov3.weights'

KeyError: "predict" when trying to get the landmarks

Hi, I loaded the model from the same models folder, but when I run model.signatures["predict"], it returns a KeyError.
I have tried printing model.signatures.keys(); there is no "predict" key in it, only dense_1.

Question about the detailed steps

I am running into a problem when loading the model. Would you please provide some details about downloading the model and using this repo? Thanks!

Failed to run eye_tracker.py

When I run eye_tracker.py from the latest trunk, I get the following error:

Traceback (most recent call last):
  File "eye_tracker.py", line 154, in <module>
    landmark_model = get_landmark_model()
  File "/Users/mnchen/dev/ML/gaze_demo/Proctoring-AI/face_landmarks.py", line 30, in get_landmark_model
    model = keras.models.load_model(saved_model)
  File "/Users/mnchen/dev/ML/gaze_demo/gaze_demo/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py", line 212, in load_model
    return saved_model_load.load(filepath, compile, options)
  File "/Users/mnchen/dev/ML/gaze_demo/gaze_demo/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 130, in load
    _read_legacy_metadata(object_graph_def, metadata)
  File "/Users/mnchen/dev/ML/gaze_demo/gaze_demo/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 179, in _read_legacy_metadata
    node_paths = _generate_object_paths(object_graph_def)
  File "/Users/mnchen/dev/ML/gaze_demo/gaze_demo/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 204, in _generate_object_paths
    for reference in object_graph_def.nodes[current_node].children:
IndexError: list index (0) out of range

Starting of project (Urgent)

Can anyone guide me regarding running the project?
I'm able to understand the file dependencies and relations, but where should I start? Would running /face_detection/faces_detection.py work, or something else?

Audio code error

Traceback (most recent call last):
  File "audio_part.py", line 84, in <module>
    convert(i);
  File "audio_part.py", line 37, in convert
    with sr.AudioFilesound as source:
AttributeError: module 'speech_recognition' has no attribute 'AudioFilesound'

Headpose Speed can be increased - Improvement

I checked headpose.py. It uses a Caffe model and is very slow on an i5 with 16 GB RAM.

It can be faster if you use the TensorFlow DNN provided by OpenCV (\samples\dnn\face_detector\opencv_face_detector.pbtxt).
I added the following lines to __init__ in the FaceDetector class:

    dnn_proto_text='models/opencv_face_detector.pbtxt'
    dnn_model='models/opencv_face_detector_uint8.pb'
    self.face_net = cv2.dnn.readNetFromTensorflow(dnn_model, dnn_proto_text)

I hope this may be helpful for someone.

Questions about eye_tracker

def find_eyeball_position(end_points, cx, cy):
    """Find and return the eyeball positions, i.e. left or right or top or normal"""
    x_ratio = (end_points[0] - cx) / (cx - end_points[2])
    y_ratio = (cy - end_points[1]) / (end_points[3] - cy)
    print(x_ratio, cx, cy)
    if x_ratio > 3:
        return 1
    elif x_ratio < 0.33:
        return 2
    elif y_ratio < 0.33:
        return 3
    else:
        return 0

===========================================
Hi, your code is very helpful for me, but I am stuck with a problem.

  1. What are x_ratio and y_ratio?
  2. What is end_points?
  3. Why did you use 3 and 0.33? Is there a special reason?

Thank you so much.

Library capabilities

  1. For the eye_tracker.py, is there no way to identify if a person is looking down? From what I see, the lib can only identify left, right or up eye movements.
  2. For head_pose_estimation.py, is there no "neutral" head pose? For instance, when I am looking directly at the camera, the output keeps saying left.

Issue on eye_tracker

Traceback (most recent call last):
  File "eye_tracker.py", line 189, in <module>
    eyeball_pos_left = contouring(thresh[:, 0:mid], mid, img, end_points_left)
TypeError: slice indices must be integers or None or have an index method

On screen coordinates

Hi, I am actually working on a similar project.
Suppose I am opening another video in a full-size frame with the gaze tracking running in the background.
I am trying to plot the points where the person is looking on that full-size frame.
In simple terms, I am trying to plot the points on the screen where the person is looking.
Any idea how to solve this problem? It would be very helpful.

Is the pre-trained facial landmark model included?

Hello
When I run python head_pose_estimation.py I get the following error:
OSError: Unable to open file (file read failed: time = Sun Dec 27 21:55:55 2020 , filename = 'models/pose_model', file descriptor = 3, errno = 21, error message = 'Is a directory', buf = 0x7fff051ef380, total read size = 8, bytes this sub-read = 8, bytes actually read = 18446744073709551615, offset = 0)
Is the pre-trained facial landmark model included in the folder? Or do we need to train it from the repo (https://github.com/yinguobing/facial-landmark-detection-hrnet) you have offered?

I am stuck with a problem in your code, please check and help me

    load_darknet_weights(yolo, 'yolov3.weights')
  File "C:\Users\IOT\Downloads\Proctoring-AI-old_master\Proctoring-AI-old_master\yolo_helper.py", line 35, in load_darknet_weights
    wf = open(weights_file, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'yolov3.weights'

shape_68.dat missing

Nice work. I was trying to run the sample code. I seem to be missing the shape_68.dat file. I see that you have tried to add it, but I don't see the file.

Please help me to solve this error...

File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5f.pyx", line 85, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = 'models/pose_model', errno = 13, error message = 'Permission denied', flags = 0, o_flags = 0)

How do I get the model that goes inside "models/pose_model"?

To take video from client device to the server.

Hi there,

I am doing a proctoring project and I need help obtaining the live video feed from the client device on the server. I was successful in hosting my video on localhost, but now, to proctor the exam, I need access to the live video feed of the candidates on my server. Can you help me with this?

eye_tracker: IndexError: list index (0) out of range.

I have encountered the following error when I tried to run python3 eye_tracker.py. Am I missing something?

2020-12-15 19:15:53.573760: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2020-12-15 19:15:53.573788: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "eye_tracker.py", line 154, in <module>
    landmark_model = get_landmark_model()
  File "/home/abhayvashokan/Downloads/Proctoring-AI/face_landmarks.py", line 30, in get_landmark_model
    model = keras.models.load_model(saved_model)
  File "/home/abhayvashokan/Downloads/venv/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py", line 212, in load_model
    return saved_model_load.load(filepath, compile, options)
  File "/home/abhayvashokan/Downloads/venv/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 130, in load
    _read_legacy_metadata(object_graph_def, metadata)
  File "/home/abhayvashokan/Downloads/venv/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 179, in _read_legacy_metadata
    node_paths = _generate_object_paths(object_graph_def)
  File "/home/abhayvashokan/Downloads/venv/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 204, in _generate_object_paths
    for reference in object_graph_def.nodes[current_node].children:
IndexError: list index (0) out of range

This is the set of commands that I have executed after cloning the repo.

The commands were executed inside the virtual environment.

pip install tensorflow
pip install opencv-python
pip install numpy==1.18.5

The version of tensorflow that is installed is 2.4.0 and that of opencv is 4.4.0.46.

If it is not too much to ask, could you provide the output of your pip freeze command? I am facing dependency version issues in other code as well.

TypeError:

Hi,
when I run eye_tracker.py I get the error below:
TypeError: slice indices must be integers or None or have an index method

thank you

ValueError: Shape must be rank 4 but is rank 5

Please help me to fix the error.
Traceback (most recent call last):
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1659, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 4 but is rank 5 for 'yolo_conv_1/conv2d_59/Conv2D' (op: 'Conv2D') with input shapes: [2,?,?,?,512], [1,1,512,256].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 23, in <module>
    yolo = YoloV3()
  File "C:\Users\IOT\Documents\Proctoring-AI-master\yolo_helper.py", line 301, in YoloV3
    x = YoloConv(256, name='yolo_conv_1')((x, x_61))
  File "C:\Users\IOT\Documents\Proctoring-AI-master\yolo_helper.py", line 208, in yolo_conv
    return Model(inputs, x, name=name)(x_in)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 554, in call
    outputs = self.call(inputs, *args, **kwargs)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\keras\engine\network.py", line 815, in call
    mask=masks)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\keras\engine\network.py", line 1002, in _run_internal_graph
    output_tensors = layer.call(computed_tensor, **kwargs)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\keras\layers\convolutional.py", line 194, in call
    outputs = self._convolution_op(inputs, self.kernel)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 966, in call
    return self.conv_op(inp, filter)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 591, in call
    return self.call(inp, filter)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 208, in call
    name=self.name)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1112, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1823, in __init__
    control_input_ops)
  File "C:\Users\IOT\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1662, in _create_c_op
    raise ValueError(str(e))
ValueError: Shape must be rank 4 but is rank 5 for 'yolo_conv_1/conv2d_59/Conv2D' (op: 'Conv2D') with input shapes: [2,?,?,?,512], [1,1,512,256].

shape_68.dat not found

Hello, I tried to use the pre-trained model from the link,
but it does not detect eyes.
Can you share your shape_68.dat file?

Error: Cannot import name 'nothing' from 'dlib_helper'

I'm getting the following error:

Exception has occurred: ImportError
cannot import name 'nothing' from 'dlib_helper' (/Proctoring-AI/dlib_helper.py)
File "/Proctoring-AI/main.py", line 13, in
from dlib_helper import (shape_to_np,

Please let me know if you need any additional details.

detect_marks() not working

Hi, I tried your code and I copied your save_model.pb file. I use my own face detection algorithm and plan to use your facial landmark system. detect_marks() is not working; it shows the error "got shape [1, 128, 128, 3], but wanted [1]". Do you know what I should do to fix this?

Thank you very much

Issues with scikit-learn for face spoofing

Would it be possible to upgrade the scikit-learn dependency for the face spoofing module to the latest version? I am having issues installing the required one (0.19.1) with pip install and can't find it in the PyPI repo.

The builds for some older versions of scikit-learn are failing as well.

eye tracking with tensorflow model

hello author,

Cheers for the great work. I have some queries regarding your implementation. I read your article on how the TensorFlow landmarks overcome the dlib landmarks, but while testing there are some false detections. Is it because of the dataset used for training?

  1. The TensorFlow version of the landmarks which you linked from the other author works well, but it detects the keypoints even when parts of the face are covered. For example, when I cover my eyes or nose, there are false detections. How can we overcome this?

Predict_proba no attribute

I got this error:
prob = clf.predict_proba(feature_vector)[0][1]
AttributeError: 'str' object has no attribute 'predict_proba'

unable load weights models/pose_model

Hello, when I run eye_tracker.py, I got the error:
File "eye_tracker.py", line 154, in <module> landmark_model = get_landmark_model()
File "~/Proctoring-AI/face_landmarks.py", line 30, in get_landmark_model model = keras.models.load_model(saved_model)
File "~/.virtualenvs/cv/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/saving.py", line 235, in load_model
with h5py.File(filepath, mode='r') as f:
File "~/.virtualenvs/cv/lib/python3.6/site-packages/h5py/_hl/files.py", line 394, in __init__ swmr=swmr)
File "~/.virtualenvs/cv/lib/python3.6/site-packages/h5py/_hl/files.py", line 170, in make_fid fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 85, in h5py.h5f.open
OSError: Unable to open file (file read failed: time = Tue Oct 27 14:43:05 2020 , filename = 'models/pose_model', file descriptor = 3, errno = 21, error message = 'Is a directory', buf = 0x7fff830498d0, total read size = 8, bytes this sub-read = 8, bytes actually read = 18446744073709551615, offset = 0)
I think the weights for detecting facial landmarks have a problem. I tried to download the pre-trained CNN weights from https://github.com/yinguobing/head-pose-estimation and it is still not working.
Can anyone please offer a clue?

facing issue in loading facial landmark model

ERROR:
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (file signature not found)

My model path is saved_model = os.getcwd() + '/models/pose_model/saved_model.pb'

The code in which I am getting this error is: model = keras.models.load_model(saved_model)

Please help me, friends. I am new to computer vision.

I also tried to find the model at this link (https://github.com/yinguobing/cnn-facial-landmark) but I am not able to find any trained model in that repo.

Hi, OSError, but error msg is 'Is a directory '. Any suggestions?

Hi, I am getting a similar error, but the error message is 'Is a directory'. Any suggestions?

Traceback (most recent call last):
  File "head_pose_estimation.py", line 131, in <module>
    landmark_model = get_landmark_model()
  File "/root/Proctoring-AI/face_landmarks.py", line 31, in get_landmark_model
    model = keras.models.load_model(saved_model)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py", line 203, in load_model
    f = h5py.File(filepath, mode='r')
  File "/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py", line 408, in __init__
    swmr=swmr)
  File "/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py", line 173, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (file read failed: time = Tue Oct 27 12:16:21 2020, filename = 'models/pose_model', file descriptor = 3, errno = 21, error message = 'Is a directory', buf = 0x7ffd0fa0e910, total read size = 8, bytes this sub-read = 8, bytes actually read = 18446744073709551615, offset = 0)

Name: numpy
Version: 1.19.2

Name:tensorflow
Version:2.0.0-alpha0

Originally posted by @kshamap in #10 (comment)

Issue while loading shape_68.dat file

Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor

Traceback (most recent call last):
  File ".\main.py", line 24, in <module>
    predictor = dlib.shape_predictor('shape_68.dat')
RuntimeError: Error deserializing object of type int

Hi @vardanagarwal,
Can you please help me solve the above error?
