MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
A collection of datasets for the purpose of emotion recognition/detection in speech.
The code for our INTERSPEECH 2020 paper "Jointly Fine-Tuning 'BERT-like' Self Supervised Models to Improve Multimodal Speech Emotion Recognition"
The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion".
Human emotion understanding using a multimodal dataset.
Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
😎 Awesome lists about Speech Emotion Recognition
A survey of deep multimodal emotion recognition.
The repo contains an audio emotion detection model, a facial emotion detection model, and a combined model that uses both to predict emotions from video
A Tensorflow implementation of Speech Emotion Recognition using Audio signals and Text Data
SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings
This repository provides an official implementation for the paper "MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild".
An audio-text multimodal emotion recognition model that is robust to missing data
Emotion recognition from speech and text using heterogeneous ensemble learning methods
An API that performs emotion recognition on audio files using a pre-trained model: it accepts an audio file as input, runs inference, and returns the predicted emotion along with a confidence score. Built on the FastAPI framework for easy development and deployment.
Published in the Springer journal Multimedia Tools and Applications.
All experiments classify multimodal data.
Official repo for "Multi-Corpus Emotion Recognition Method based on Cross-Modal Gated Attention Fusion" in INTERSPEECH 2024