multimodal-sentiment-analysis's People

Contributors

gangeshwark, soujanyaporia

multimodal-sentiment-analysis's Issues

package version

Thanks for sharing your work and code! Could you please tell me which version of TensorFlow you used? I am running into many issues caused by version incompatibilities.
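
For reference, a quick way to check which TensorFlow version is installed locally; the session.py-style tracebacks in other issues here suggest the code targets a 1.x release, though the exact version the authors used is not documented:

    import tensorflow as tf

    # Print the locally installed TensorFlow version.
    print(tf.__version__)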

About MOSEI Dataset Classes

In the README.md:

MOSEI:
3 classes: happy/sad/neutral/angry/excited/frustrated 
Raw Features: (Pickle files) 
Audio: dataset/mosei/raw/audio_3way.pickle 
Text: dataset/mosei/raw/text_3way.pickle 
Video: dataset/mosei/raw/video_3way.pickle 

The README says "3 classes" but then lists six labels (happy/sad/neutral/angry/excited/frustrated).
So does the MOSEI dataset have 3 classes or 6?

How was the data created?

We noticed that unimodal_mosei_3way.pickle.zip has three classes: positive, negative, and neutral. However, the original CMU-MOSEI data has either five classes or binary labels.
Could you explain how these data were generated, and whether they are features or raw data?

Error when running unimodal experiment

Hi,
When I run the command:

python3 run.py --unimodal True --fusion True

the output is:

Traceback (most recent call last):
  File "run.py", line 384, in <module>
    unimodal(mode, args.data, args.classes)
  File "run.py", line 276, in unimodal
    test_feed_dict)
  File "/Users/marcostexeira/tensorflow3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 905, in run
    run_metadata_ptr)
  File "/Users/marcostexeira/tensorflow3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1113, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (31, 63) for Tensor 'y:0', which has shape '(?, 63, 2)'

Is this related to running locally rather than on a GPU?
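
The mismatch suggests the integer labels need to be one-hot encoded before being fed to y:0. A minimal sketch, assuming two classes; the to_one_hot helper is hypothetical, not from the repo:

    import numpy as np

    def to_one_hot(labels, num_classes=2):
        # labels: int array of shape (batch, seq_len), e.g. (31, 63)
        eye = np.eye(num_classes, dtype=np.float32)
        return eye[labels]  # -> (batch, seq_len, num_classes)

    y = np.random.randint(0, 2, size=(31, 63))
    print(to_one_hot(y).shape)  # (31, 63, 2)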

What's the point of fusion?

Hi, I have tried some of the commands (each run as python run.py with the flags listed below), and the results are:

| # | Flags | Best epoch | Best test accuracy | Epoch with least loss | Test accuracy at that epoch |
|---|-------|------------|--------------------|-----------------------|-----------------------------|
| 1 | --unimodal True --fusion True --attention_2 True | 40 | 0.7593085169792175 | 22 | 0.7433510422706604 |
| 2 | --unimodal False --fusion True --attention_2 True | 33 | 0.7619680762290955 | 21 | 0.7380319237709045 |
| 3 | --unimodal True --fusion False | 12 | 0.769946813583374 | 11 | 0.7659574747085571 |
| 4 | --unimodal True --fusion True | 13 | 0.7659574747085571 | 12 | 0.7606382966041565 |
| 5 | --unimodal False --fusion True | 14 | 0.769946813583374 | 13 | 0.7686170339584351 |
| 6 | --unimodal False --fusion False | 12 | 0.7659574747085571 | 11 | 0.7606382966041565 |

First, I expected Command 1 to be the best, but it is not: the best results come from Commands 3 and 5, both without attention_2. So what is the point of attention_2? Also, why does your paper report F-score instead of accuracy?

Second, the best accuracy of Command 3 (without fusion) is better than that of Command 4, so what is the point of fusion?

OpenSmile features for IEMOCAP

Hi @gangeshwark @soujanyaporia, I would like to ask how the 100 openSMILE features for the IEMOCAP dataset were extracted. I have not been able to generate data similar to what is provided in the pkl files. Could you give a detailed explanation of how the features were produced, or provide the feature-extraction scripts?

Kind regards
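
A minimal sketch of how openSMILE's SMILExtract CLI is commonly invoked. The config and file names below are assumptions, and this standard IS09 config produces 384 functionals rather than the 100 features in the pickles, so it is clearly not the authors' exact setup:

    import subprocess

    # Run SMILExtract with an assumed config; -C selects the config,
    # -I the input wav, -O the output file for the extracted functionals.
    subprocess.run([
        "SMILExtract",
        "-C", "config/IS09_emotion.conf",  # assumed config shipped with openSMILE
        "-I", "utterance.wav",             # hypothetical input utterance
        "-O", "features.arff",             # functionals written as ARFF
    ], check=True)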

About Metric used: macro-fscore And attention visualization

Hi, I have read your paper "Multi-level Multiple Attentions for Contextual Multimodal Sentiment Analysis". The metric used in the paper is macro F-score, but the source code you provide only outputs accuracy on the test set, and the F1 on the training set is no more than 80%. So I would like to know how the macro F-score is computed on the test set.
Also, could you tell me how to produce the attention visualization? Thanks.
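
For reference, a sketch of how macro F-score could be computed on test-set predictions with scikit-learn; this is an assumption, not the authors' evaluation code:

    import numpy as np
    from sklearn.metrics import f1_score

    # Toy labels and predictions; macro-averaging computes F1 per class
    # and then takes the unweighted mean across classes.
    y_true = np.array([0, 1, 1, 0, 1, 0])
    y_pred = np.array([0, 1, 0, 0, 1, 1])
    print(f1_score(y_true, y_pred, average="macro"))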

Code for data processing

Hi,

Can you share the script used to create the pickle files from the original data files?

Thanks.

Regenerated unimodal_mosi_2way.pickle reduces accuracy

When I first ran "python run.py --unimodal True --fusion True", unimodal_mosi_2way.pickle was overwritten. When I then ran "python run.py --unimodal False --fusion True --attention_2 True", the accuracy was only 0.76.
I can only reach the accuracy reported in the paper with the unimodal_mosi_2way.pickle provided by the authors. Could you provide the code that produces unimodal_mosi_2way.pickle? The existing code does not achieve the results described in the paper.
Thank you very much for looking at my problem while you are busy; I look forward to your reply!

not enough values to unpack (expected 13, got 7)

When I run python run.py --unimodal True --fusion True, I get this error:

Namespace(fusion=True, unimodal=True)
Training unimodals first
('starting unimodal ', 'text')
Traceback (most recent call last):
  File "run.py", line 415, in <module>
    unimodal(mode)
  File "run.py", line 240, in unimodal
    (train_data, train_label, _, _, test_data, test_label, _, train_length, _, test_length, _, _, _) = u.load()
ValueError: not enough values to unpack (expected 13, got 7)

How can I solve this?
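
A diagnostic sketch: inspect how many fields the pickle actually holds before unpacking; the path below is a placeholder for whichever file u.load() reads. Seven fields where run.py expects thirteen suggests a stale or mismatched data file:

    import pickle

    # Load the pickle and report how many top-level fields it contains.
    with open("path/to/data.pickle", "rb") as f:  # hypothetical path
        data = pickle.load(f)
    print(type(data), len(data))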

Code for dimension equalization

Hi,
Could you share the script used to perform the dimension equalization (d = 300) described in the paper (Multi-level Multiple Attentions for Contextual Multimodal Sentiment Analysis) for the multimodal fusion experiment?
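
The paper does not release this script, so here is a hedged sketch of one plausible approach: equalizing a modality's feature dimension to d = 300 with a PCA projection. The 6373-dimensional input is an assumption, not the repo's actual feature size:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Hypothetical utterance-level features (1000 samples, 6373 dims).
    audio = rng.standard_normal((1000, 6373))
    audio_300 = PCA(n_components=300).fit_transform(audio)
    print(audio_300.shape)  # (1000, 300)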
