The AI-Soundscape repository is currently being uploaded.
1) Unzip the Dataset archive into the application folder
2) Unzip the pretrained_models archive into the application folder
3) Enter the application folder: cd application
python inference_ARP_SSC.py -model DCNN_CaF
----------------------------------------------------------------------------------------
Parameters num: 7.614459 M
ARP:
MAE: 0.8372967121097868, RMSE: 1.0516823194115195
SSC:
AUC: 0.9006453415552284
python inference_ARP_SSC.py -model DNN
----------------------------------------------------------------------------------------
Parameters num: 0.375769 M
ARP:
MAE: 1.0421484866315518, RMSE: 1.3410438201986232
SSC:
AUC: 0.8803192730739317
python inference_ARP_SSC.py -model CNN
----------------------------------------------------------------------------------------
Parameters num: 1.270811 M
ARP:
MAE: 0.9336034988862009, RMSE: 1.1964973202106266
SSC:
AUC: 0.8791363928144383
python inference_ARP_SSC.py -model CNN_Transformer
----------------------------------------------------------------------------------------
Parameters num: 17.971227 M
ARP:
MAE: 0.9509568487731644, RMSE: 1.1900119539356668
SSC:
AUC: 0.8636173978248662
python inference_SSC.py -model DCNN_CaF_SSC
----------------------------------------------------------------------------------------
Parameters num: 4.961112 M
SSC:
AUC: 0.922593503235641
python inference_SSC.py -model DNN
----------------------------------------------------------------------------------------
Parameters num: 3.231 M
SSC:
AUC: 0.8727500810814068
python inference_SSC.py -model CNN
----------------------------------------------------------------------------------------
Parameters num: 79.72284 M
SSC:
AUC: 0.903805115943463
python inference_SSC.py -model CNN_Transformer
----------------------------------------------------------------------------------------
Parameters num: 86.207256 M
SSC:
AUC: 0.8509725627182434
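For reference, the metrics printed above are MAE and RMSE for the ARP regression task and AUC for the SSC classification task. Below is a minimal, self-contained sketch of how these three metrics can be computed with NumPy; it is an illustrative implementation written for this README, not the repository's own evaluation code, and the example arrays are made up.

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error between predictions and targets.
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    # Root mean squared error between predictions and targets.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def auc(labels, scores):
    # Rank-based AUC (no score ties): the probability that a randomly
    # chosen positive sample is scored above a randomly chosen negative.
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos = pos.sum()
    n_neg = (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

# Toy example (hypothetical values, not from the dataset):
labels = np.array([0, 0, 1, 1])
scores = np.array([0.10, 0.40, 0.35, 0.80])
print(auc(labels, scores))  # → 0.75
```

The same functions apply directly to the model outputs: ARP predictions go through `mae`/`rmse`, and per-class SSC scores through `auc` (averaged over classes, if evaluated per class).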
Table 1: Details of the parameters and computational overhead of the models used and proposed in the paper.
Figure: Attention distributions of the cross-attention-based fusion module in the DCNN-CaF model. The 8 heads in the upper row come from MHA1, and the 8 heads in the lower row come from MHA2.