
ad-corre's People

Contributors

aliprf


ad-corre's Issues

Got very different accuracy testing on RAF-DB test set

Salam Ali, thanks for sharing the source code.
I could successfully test your method on the RAF-DB test set using the AffectNet_6336.h5 model, but got the following error when trying to use the other models, e.g. RafDB_8696.h5 and Fer2013_7203.h5, in the same environment:
"ValueError: bad marshal data (unknown type code)"

Also, when I test your method on the RAF-DB test set using the AffectNet_6336.h5 model, I get an accuracy of 0.67, which doesn't match the one reported in the paper. Is that because I have to use the RafDB_8696.h5 model for testing on the RAF-DB test set?
Please advise.
Thanks.
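
For reference: `ValueError: bad marshal data` typically appears when a saved .h5 model contains marshalled Python bytecode (e.g. from a Lambda layer) and is loaded under a different Python version than the one it was saved with. One commonly suggested workaround, sketched below, is to rebuild the architecture in code and load only the weights; note that `build_model` is a hypothetical stand-in for whatever function in this repository constructs the network.

```python
# Hedged sketch: rebuild the network in code, then load weights only.
# Loading just the weights avoids unmarshalling any saved Python bytecode.
model = build_model(num_classes=7)   # hypothetical model-construction helper
model.load_weights('RafDB_8696.h5')  # reads the weights group from the .h5 file
```

Alternatively, loading the file under the same Python version used to save it usually avoids the error entirely.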

Query Regarding Data Storage Format for Training a Deep Learning Model

I'm currently in the process of training a deep learning model and I'm uncertain about the required data storage format. I'm seeking guidance from the authors on the recommended format, as well as any specific data preprocessing steps that might be necessary. I'm using the XYZ deep learning framework and would appreciate any advice to better prepare my training data.

I've gone through the documentation (link to the documentation), but I haven't found explicit instructions on the data storage format. Here's the data storage structure I'm currently attempting:

Example data storage structure:

```
dataset/
    train/
        class_1/
            image1.jpg
            image2.jpg
            ...
        class_2/
            image1.jpg
            image2.jpg
            ...
        ...
    validation/
        class_1/
            image1.jpg
            image2.jpg
            ...
        class_2/
            image1.jpg
            image2.jpg
            ...
        ...
```

I'd appreciate recommendations from the authors on the data storage format, along with guidance on any required preprocessing steps. Thank you very much!
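
In case it helps: the class-per-folder layout above matches what Keras's built-in loaders expect. A minimal sketch, assuming TensorFlow/Keras (the asker's framework is unspecified) and the `dataset/` tree shown above; the 224x224 image size is an assumption and should match the model:

```python
import tensorflow as tf

# Build training and validation datasets from the class-per-folder layout.
# Class labels are inferred from the subdirectory names (class_1, class_2, ...).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train",
    image_size=(224, 224),  # assumed input size; match your model
    batch_size=32,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/validation",
    image_size=(224, 224),
    batch_size=32,
)

# Typical preprocessing step: scale pixel values to [0, 1].
normalize = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
val_ds = val_ds.map(lambda x, y: (normalize(x), y))
```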

ValueError while using pre-trained model

I tried to use the pre-trained model after downloading the .h5 file from the GitHub repo,

but it keeps saying: ValueError: bad marshal data (unknown type code)

What's wrong? Do I need to download an additional file, or adjust the version of Python or other libraries?

Here's the code I want to run; it basically determines the emotion from the webcam feed:

```python
# Xception Final

import cv2
import numpy as np
import tensorflow as tf
from keras.models import load_model

def f1_metric(y_true, y_pred):
    y_pred = tf.round(y_pred)
    f1 = 2 * tf.reduce_sum(y_true * y_pred) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + 1e-10)
    return f1

# Register the custom metric so Keras can deserialize it by name
# (optional here, since it is also passed via custom_objects below).
tf.keras.utils.register_keras_serializable("f1_metric")(f1_metric)

# Load the trained emotion recognition model. Set the path to your model file.
emotion_model = load_model('Fer2013_7203.h5', custom_objects={"f1_metric": f1_metric})

# Load the OpenCV Haar cascade for face detection
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Initialize the webcam
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()  # Read a video frame
    if not ret:
        break

    # Haar cascades operate on grayscale images, so convert the frame first
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

    for (x, y, w, h) in faces:
        # Extract the face region from the frame
        face = frame[y:y+h, x:x+w]

        # Resize the face region to match the model's input size (224x224 here)
        face = cv2.resize(face, (224, 224))

        # Convert from OpenCV's BGR to the RGB format the model expects
        face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)

        # Normalize pixel values to [0, 1]
        face = face / 255.0

        # Run the preprocessed face through the emotion recognition model
        emotion_prediction = emotion_model.predict(np.expand_dims(face, axis=0))

        # Map the predicted class index to an emotion label
        emotions = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]
        emotion_label = emotions[np.argmax(emotion_prediction)]

        # Draw a rectangle around the detected face and label the emotion
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        cv2.putText(frame, emotion_label, (x, y-10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

    # Display the frame with face detection and emotion recognition
    cv2.imshow("Emotion Recognition", frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
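
A small sanity check worth adding: the expected input size can be read from the loaded model rather than hard-coded. A short follow-up sketch, continuing from the script above:

```python
# Continuing from the script above: read the expected spatial input size
# from the loaded model, e.g. input_shape == (None, 224, 224, 3).
_, in_h, in_w, _ = emotion_model.input_shape
face = cv2.resize(face, (in_w, in_h))  # replaces the hard-coded (224, 224)
```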

How to generate soft-landmarks?

How do you generate the soft-landmarks? I don't see the relevant code. 😥 Both the tough and tolerant teacher models use the hard landmarks as labels.
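
For readers hitting the same question: the repository doesn't appear to ship this code. Purely as an illustration of what "soft" landmark labels can look like (a generic technique, not necessarily the authors' method), hard landmark coordinates can be rendered as per-point Gaussian heatmaps:

```python
import numpy as np

def soft_landmark_heatmap(x, y, height, width, sigma=2.0):
    """Render a hard landmark (x, y) as a soft 2-D Gaussian heatmap.

    Generic illustration of soft landmark labels; not necessarily the
    procedure used in the paper.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    return heatmap / heatmap.sum()  # normalize so the map sums to 1
```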
