
multimodal-autoencoder-ascertain

This is an experiment applying the Multimodal Autoencoder [1], developed at the MIT Media Lab, to the ASCERTAIN dataset [2], an emotion recognition dataset featuring commercial sensor data and personality ratings.
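The core idea behind the Multimodal Autoencoder is to reconstruct a full multimodal feature vector from an input in which one modality is missing. Below is a minimal numpy sketch of that idea on synthetic data (not the authors' implementation; the feature dimensions and two-modality split are hypothetical stand-ins for ASCERTAIN's sensor and personality features):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two modalities concatenated per sample,
# e.g. 8 sensor features and 4 personality features.
n, d1, d2 = 200, 8, 4
d = d1 + d2
X = rng.normal(size=(n, d))

# Corrupt each input by zeroing out one modality at random,
# mimicking missing sensor data.
mask = rng.integers(0, 2, size=(n, 1))
X_in = X.copy()
X_in[:, :d1] *= mask          # drop modality 1 where mask == 0
X_in[:, d1:] *= (1 - mask)    # drop modality 2 where mask == 1

# One-hidden-layer autoencoder trained by plain gradient descent
# to reconstruct the complete vector from the corrupted input.
h = 16
W1 = rng.normal(scale=0.1, size=(d, h))
W2 = rng.normal(scale=0.1, size=(h, d))
lr = 0.01

losses = []
for _ in range(500):
    H = np.tanh(X_in @ W1)            # encoder
    Xhat = H @ W2                     # decoder
    err = Xhat - X
    losses.append(float(np.mean(err ** 2)))
    # Backprop through the two layers (gradient scale folded into lr).
    gW2 = H.T @ err / n
    gH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X_in.T @ gH / n
    W1 -= lr * gW1
    W2 -= lr * gW2

print(f"reconstruction MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The reconstruction error on the held-out modality is what makes the imputed features usable for downstream prediction; the paper's actual model is deeper and trained per-modality, so this is only an illustration of the objective.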

Reference

[1] N. Jaques, S. Taylor, A. Sano and R. Picard, "Multimodal Autoencoder: A Deep Learning Approach to Filling In Missing Sensor Data and Enabling Better Mood Prediction," International Conference on Affective Computing and Intelligent Interaction (ACII), Texas, USA, October 2017. Code: https://github.com/natashamjaques/MultimodalAutoencoder

[2] R. Subramanian, J. Wache, M. K. Abadi, R. L. Vieriu, S. Winkler and N. Sebe, "ASCERTAIN: Emotion and Personality Recognition Using Commercial Sensors," IEEE Transactions on Affective Computing, vol. 9, no. 2, pp. 147-160, April-June 2018, doi: 10.1109/TAFFC.2016.2625250. Dataset: http://mhug.disi.unitn.it/wp-content/ASCERTAIN/ascertain.html