The MixedEmotions platform is a Big Data toolbox for multilingual and multimodal emotion analysis. The toolbox was introduced in a series of three webinars. Here you can find some material, including a full video of the webinar.
The MixedEmotions platform is built around stand-alone Docker modules, with an orchestrator that links the modules into analysis workflows and uses Apache Mesos for scalable cloud deployment.
Core capabilities include emotion extraction from text, audio and video, alongside sentiment analysis, social network analysis, entity detection and linking, and sophisticated data visualisation.
Want more info? Download the handouts:
Today, emotion detection services based on speech technologies are making their way to the market. Speech carries valuable information about the speaker, and the speaker's emotions are part of it. Just imagine the potential power of a mining tool for emotions.
Gestures, facial expressions, and speech are the key ways we communicate and convey thoughts to others. However, the content of speech (i.e., the words) accounts for only around seven percent of communication. The rest of human communication lies in HOW we talk (38%) and in how we move our body and use facial expressions (55%).
Can you imagine an alarm clock that knew you slept badly and chose your favourite song to raise your spirits? Or a TV that picked the movie you needed today to make you smile? Or a car that suggests you stop to grab a cup of coffee?
These premises, which until now belonged to science-fiction cinema, from classics like Blade Runner to more recent hits like Her, may be closer to reality thanks to the convergence of emotion analysis, Big Data, and the Internet of Things (IoT), technologies being researched in the European project MixedEmotions.
What’s the challenge? One problem is that traditional approaches to data investigation have lacked the ability to handle mixed data effectively. Enterprise systems let you search across structured and unstructured data, but only reveal “results” in the form of, for example, blue links. Business Intelligence systems, on the other hand, are great for viewing aggregates but lack the ability to search and drill down to the last single record.
Within the context of the MixedEmotions project we’re interested in investigating the “emotions” in texts. At the core are algorithms that extract measurements of “emotional states” from text. The harvested emotional data can then be measured, averaged, and acted upon.
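To make the idea concrete, here is a minimal sketch of that measure-and-average step. The `analyse` function is a toy keyword lookup standing in for a real emotion-extraction module (the lexicon and scores are invented for illustration, not taken from the MixedEmotions platform):

```python
# Toy sketch: extract a per-document emotion score, then aggregate.
# analyse() is a stand-in for a real emotion-extraction module.

from statistics import mean

# Invented mini-lexicon: word -> crude valence score in [0, 1]
TOY_LEXICON = {"great": 0.9, "happy": 0.8, "bad": 0.2, "sad": 0.1}

def analyse(text):
    """Return a crude valence score in [0, 1] for a text (toy stand-in)."""
    hits = [TOY_LEXICON[w] for w in text.lower().split() if w in TOY_LEXICON]
    return mean(hits) if hits else 0.5  # neutral if no cue words found

documents = [
    "What a great happy day",
    "Such a sad bad outcome",
    "Nothing special here",
]

# Measure each document, then average across the collection.
scores = [analyse(doc) for doc in documents]
average_valence = mean(scores)
```

A real pipeline would replace `analyse` with a call to a trained classifier or an emotion-analysis service; the harvesting-then-aggregating pattern stays the same.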
MixedEmotions finds and identifies emotions in Big Data. How are we doing this? The first step is to select an emotion classification scheme. Research into emotion has proposed several approaches to classifying and characterising emotion. So, which one to choose?
Early work on the linguistic characterisation of emotion found that emotion could largely be captured by just three dimensions: primarily affective valence (ranging from positive to negative) and arousal (ranging from calm to excited), with a third dimension labelled “dominance” or “control” having less significance. Much work in emotion analysis has used this VAD (Valence, Arousal, Dominance) model, including the widely used dictionary of the emotional significance of words, “Affective Norms for English Words” (ANEW).
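A simple way to apply the VAD model to text is to look each word up in an ANEW-style lexicon and average the three dimensions over the words found. The sketch below does exactly that; the lexicon values are illustrative placeholders on ANEW's 1-to-9 rating scale, not the published ANEW norms:

```python
# Sketch of VAD scoring against an ANEW-style lexicon.
# The values below are illustrative placeholders, NOT the published norms.

from statistics import mean

VAD_LEXICON = {
    # word: (valence, arousal, dominance), each on a 1-9 scale as in ANEW
    "happy": (8.2, 6.5, 7.0),
    "calm":  (7.0, 2.5, 6.0),
    "angry": (2.5, 7.2, 5.5),
}

def vad_score(text):
    """Average the V, A, D values of the lexicon words found in the text."""
    found = [VAD_LEXICON[w] for w in text.lower().split() if w in VAD_LEXICON]
    if not found:
        return None  # no emotional cue words recognised
    # zip(*found) groups the valence, arousal and dominance columns
    return tuple(mean(dim) for dim in zip(*found))

valence, arousal, dominance = vad_score("I feel happy and calm today")
```

In this scheme, a positive review would land high on valence, a rant high on arousal and low on valence, and so on, which is what makes the three-dimensional model convenient for downstream aggregation.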
Facebook has expanded its Like button. The six emoji alternatives, called “Reactions,” give Facebook users an extended palette of emotions, most of which amount to various shades of positivity.
Read more: http://www.wired.com/2015/10/facebook-reactions-design/