A CNN-based, fully fledged hand-gesture-to-text converter for the deaf and hard of hearing.
Done as part of the Image And Video Processing (IIVP632C) course at IIIT-Allahabad.
DATASET LINK* If you don't want to generate your own dataset, you can use ours: https://drive.google.com/drive/folders/1ejRWKWzXoNih9MT2E8m19hFtVjB9XXCk?usp=sharing
Steps to run this code on your PC:
********* STEPS TO TRAIN YOUR OWN MODEL *********
Directory Structure
- Create a directory named Group_12.
- Put all the code from "code.zip" into Group_12.
- Create a subdirectory Dataset as Group_12/Dataset.
- Create 28 empty folders, one named after each gesture, for training purposes (e.g. Group_12/Dataset/Atrain, Group_12/Dataset/Btrain, and so on; our code uses 28 gestures).
- Create 28 more empty folders, one named after each gesture, for model testing (e.g. Group_12/Dataset/Atest, Group_12/Dataset/Btest, and so on; likewise, there are 28 such test folders).
- Create a subdirectory TrainedModel as Group_12/TrainedModel.
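The directory layout above can be scaffolded with a short script. This is only a sketch: the 26 letter folders follow from the example names, but the two remaining gesture names are project-specific, so the extras used here ("SPACE", "NOTHING") are placeholders you should replace with the actual gesture names from the code.

```python
import os
import string

# 26 letters are given by the example folder names; the last two gesture
# names are placeholders (assumptions), not the project's actual names.
gestures = list(string.ascii_uppercase) + ["SPACE", "NOTHING"]

base = os.path.join("Group_12", "Dataset")
for g in gestures:
    os.makedirs(os.path.join(base, g + "train"), exist_ok=True)  # training folder
    os.makedirs(os.path.join(base, g + "test"), exist_ok=True)   # testing folder
os.makedirs(os.path.join("Group_12", "TrainedModel"), exist_ok=True)
```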
Generating the dataset
- Install all the required packages listed in the "requirements.txt" file.
- Run PalmTracker.py 56 times, each time changing the folder name and the number of images inside the code (both are set at the top of the file).
- Start PalmTracker.py and wait 5 seconds for it to read the background, then press 's' to start capturing and keep showing the gesture until the code stops.
- The captured gesture images will be saved inside Dataset/{folder_name_given_in_code}.
- Repeat the steps above 56 times (once per train and test folder) to build the whole dataset.
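The 5-second background read relies on background elimination. A minimal sketch of the idea, assuming a running-average background model (PalmTracker.py may implement this differently, e.g. via OpenCV's accumulateWeighted):

```python
import numpy as np

def update_background(bg, frame, alpha=0.5):
    # Running weighted average: blend the new frame into the background model.
    return (1 - alpha) * bg + alpha * frame

def segment_hand(bg, frame, threshold=25):
    # Pixels that differ from the background by more than the threshold
    # are treated as the hand; result is a binary mask (0 or 255).
    diff = np.abs(frame.astype(float) - bg)
    return np.where(diff > threshold, 255, 0).astype(np.uint8)
```

During the initial 5 seconds only `update_background` runs; once 's' is pressed, each frame is passed through `segment_hand` to isolate the gesture.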
To Train the model
- Run ModelTrainer.ipynb to train the model, adjusting the hyperparameters as appropriate.
- Name the model inside ModelTrainer.ipynb.
- The trained model will be saved under that name in the "TrainedModel" folder (which you created earlier).
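For orientation, here is a minimal sketch of a CNN of the kind ModelTrainer.ipynb might build. The 50x50 grayscale input, the layer sizes, and everything except the 28 output classes are assumptions, not the notebook's actual architecture:

```python
from tensorflow.keras import layers, models

def build_model(input_shape=(50, 50, 1), num_classes=28):
    # Assumed architecture: two conv/pool stages, then a small dense head.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # one unit per gesture
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

After `model.fit(...)`, saving with `model.save(...)` into the TrainedModel folder produces the file that MyApp2.py later loads.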
To Test the project
- Enter the name of the model to be loaded inside MyApp2.py at line 333.
- Run MyApp2.py and wait while the saved model loads.
- Press Enter to start the GUI.
- Wait 5 seconds for the code to read your background for background elimination.
- Press 's' to start gesture prediction.
- Now show gestures to generate characters/sentences.
- Press 'q' to stop the app.
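The per-frame prediction step can be sketched as follows: scale the segmented hand mask to [0, 1], add batch and channel axes, and take the argmax of the model's class probabilities. The 50x50 input size is an assumption, and MyApp2.py may do this differently:

```python
import numpy as np

def preprocess(mask):
    # Normalize a 2D uint8 mask to [0, 1] and reshape to (1, H, W, 1),
    # the batch-of-one grayscale format a Keras CNN expects.
    img = mask.astype("float32") / 255.0
    return img[np.newaxis, :, :, np.newaxis]

def predict_class(probabilities):
    # The predicted gesture is the class with the highest probability.
    return int(np.argmax(probabilities))
```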
********* STEPS TO RUN THE TRAINED MODEL *********
- Create a directory named Group_12.
- Put "MyApp2.py" inside this directory.
- Create a subdirectory TrainedModel as Group_12/TrainedModel.
- Extract the contents of TrainedModel.zip into the TrainedModel directory.
- You can customize the gesture-to-text mapping inside MyApp2.py (lines 59 to 119) if you want to; otherwise the default mapping, described in the doc as well as the ppt, is used.
- Run MyApp2.py to start the project.
- Press Enter to start the GUI.
- Wait 5 seconds for the code to read your background for background elimination.
- Press 's' to start gesture prediction.
- Now show gestures to generate text.
- Press 'q' to stop the app.
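The gesture-to-text mapping defined around lines 59-119 of MyApp2.py presumably looks something like the sketch below; the dictionary name and the two non-letter entries here are illustrative assumptions, not the file's actual contents:

```python
import string

# 26 letter gestures plus two assumed special gestures (placeholders).
GESTURE_TO_TEXT = dict(enumerate(string.ascii_uppercase))
GESTURE_TO_TEXT[26] = " "      # assumed: a "space" gesture
GESTURE_TO_TEXT[27] = "<DEL>"  # assumed: a "delete" gesture

def class_to_text(class_index):
    # Fall back to "?" for any class index outside the mapping.
    return GESTURE_TO_TEXT.get(class_index, "?")
```

Editing the values of such a dictionary is all that customizing the mapping amounts to: the model still predicts a class index, and the dictionary decides what text that index produces.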