
Speech Abstract 2

rajeswarivegesna requested to merge rajeswarivegesna-master-patch-56687 into master

Recognizing human emotion has long been a fascinating task for data scientists. Emotions are subjective, and people interpret them differently; the very notion of emotion is hard to define. Deep learning systems such as Convolutional Neural Networks (CNNs) can infer a hierarchical representation of input data that facilitates categorization. Here, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using a semi-CNN. Training the semi-CNN proceeds in two stages. In the first stage, unlabeled samples are used to learn candidate features with a contractive convolutional neural network under a reconstruction penalty. In the second stage, these candidate features serve as input to the semi-CNN, which learns affect-salient, discriminative features using a novel objective function that encourages feature saliency, orthogonality, and discrimination. Experimental results on benchmark datasets show that our approach yields stable and robust recognition performance in complex conditions (e.g., with speaker and environment distortion) and outperforms several well-established SER features.
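The two-stage idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the data, the weight matrix `W`, the tanh encoder, and the penalty weight are all hypothetical stand-ins. It only shows the flavor of the two loss terms — a stage-1 reconstruction penalty (autoencoder-style) and a stage-2 orthogonality term that pushes the learned feature directions apart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for unlabeled stage-1 input: 8 samples, 6 dimensions.
X = rng.normal(size=(8, 6))

# Hypothetical feature weights mapping 6 inputs to 4 candidate features.
W = rng.normal(size=(6, 4))

def reconstruction_loss(X, W):
    """Stage-1 style penalty: encode with W, decode with W.T, and
    penalize the mean squared reconstruction error."""
    H = np.tanh(X @ W)      # candidate features
    X_hat = H @ W.T         # linear decoding back to input space
    return np.mean((X - X_hat) ** 2)

def orthogonality_penalty(W):
    """Stage-2 style term: encourage the feature columns of W to be
    orthonormal by penalizing ||W^T W - I||_F^2."""
    G = W.T @ W
    return np.sum((G - np.eye(G.shape[0])) ** 2)

# A combined objective (weighting chosen arbitrarily for illustration);
# the paper's actual objective also includes saliency and discrimination
# terms that require labels and are omitted here.
total = reconstruction_loss(X, W) + 0.1 * orthogonality_penalty(W)
print(total)
```

In a real system these terms would be minimized by gradient descent over the network weights; the sketch only evaluates them once to make the structure of the objective concrete.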

Edited by spandana
