We're very enthusiastic about giving everybody the chance to make their voices heard! Public speaking is a fear that many of us share, and because of that fear, and a general lack of practice resources, many talented people never get recognition for their work or the chance to make their views known.
We started by brainstorming the key pain points speakers face today, and landed on one core issue: the lack of opportunities to present to a real audience in a setting that feels safe. We are therefore building a virtual environment in which a simulated audience responds to the speaker and provides feedback, so the speaker can practise and improve without feeling anxious or exposed. We have built this ecosystem on the NReal Platform.
To generate responses in real time, we use natural language processing to extract sentiment from the user's spoken content (via the Valence Aware Dictionary and sEntiment Reasoner, VADER). We also trained a neural network on speech audio (using 1,600+ clips from the RAVDESS dataset) to recognise emotion in the voice itself. By combining these two signals, what the user says and how they say it, we built a model that listens to the speaker and produces a response, which in turn drives the reactions of our virtual audience.
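A minimal sketch of how those two signals might be fused into one audience reaction, assuming a VADER-style compound score from the transcript and an emotion-probability dict from the audio model; the function name, emotion labels, and thresholds here are illustrative, not our actual code:

```python
def fuse_reaction(text_compound, audio_emotions):
    """Pick an audience reaction from a VADER-style compound score
    (-1.0 .. 1.0) and a dict of audio-emotion probabilities."""
    # Dominant emotion predicted from the speech audio.
    dominant = max(audio_emotions, key=audio_emotions.get)
    # When the audio signal is ambiguous (neutral), let the text
    # sentiment break the tie; 0.05 is VADER's conventional cutoff.
    if dominant == "neutral":
        if text_compound >= 0.05:
            return "engaged"
        if text_compound <= -0.05:
            return "sad"
    return dominant

print(fuse_reaction(0.6, {"neutral": 0.7, "happy": 0.2, "sad": 0.1}))
# engaged
```

In practice the compound score would come from `SentimentIntensityAnalyzer.polarity_scores` and the emotion dict from the trained RAVDESS model, but the fusion step itself can stay this simple.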
Our team has created audience reaction animations for common emotions such as engaged, happy, sad, and shocked, and paired each emotion with a matching verbal response. So when the user puts on the glasses and speaks, they see a simulated audience that responds in real time.
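The emotion-to-reaction pairing can be sketched as a simple lookup; the clip names and verbal lines below are hypothetical placeholders for the assets we built:

```python
# Reaction -> (animation clip, verbal response). All asset names are
# illustrative stand-ins, not the actual clips in the project.
REACTIONS = {
    "engaged": ("nod_loop", "Mm-hmm."),
    "happy":   ("smile_clap", "Nice!"),
    "sad":     ("look_down", "Oh..."),
    "shocked": ("lean_back", "Whoa!"),
}

def audience_response(emotion):
    # Fall back to an idle animation for emotions we have no asset for.
    return REACTIONS.get(emotion, ("idle", ""))

print(audience_response("happy"))  # ('smile_clap', 'Nice!')
```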
We have also worked on improvement-focused feedback metrics, such as speech speed, tonal variety, and volume change. We haven't had time to design the front end for these features yet, but these are areas we're looking forward to working on!
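Two of these metrics are cheap to compute once you have a transcript and per-frame audio levels; a hedged sketch, assuming words-per-minute as the proxy for speech speed and the spread of frame RMS levels as the proxy for volume change (both assumptions, not our final definitions):

```python
import statistics

def speaking_rate(transcript, duration_s):
    """Words per minute, an assumed proxy for speech speed."""
    return len(transcript.split()) / duration_s * 60.0

def volume_variation(frame_rms):
    """Standard deviation of per-frame RMS levels, an assumed
    proxy for how much the speaker varies their volume."""
    return statistics.stdev(frame_rms)

print(speaking_rate("hello everyone thanks for coming today", 3.0))  # 120.0
print(volume_variation([0.2, 0.2, 0.2]))  # 0.0 (flat delivery)
```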