MPathy

MPathy reduces misunderstandings and promotes interpersonal empathy in digital communication by processing AI-interpreted sensor data into emotion visualisations and sending these, linked to text messages, to the chat partners. The project focused on researching and conceiving the idea within eight weeks. Validating the concept through user tests is still necessary; this was no longer possible within the time frame.

Empathize

Through extensive online research and reading of communication theory and psychological literature, we generated key insights in the following areas.

Interview

Qualitative interviews were conducted with four people on the topic of digital communication. The aim was to reveal experiences, current habits and problems. Every person feels emotions differently, so it was an open question whether people would also represent emotions differently when visualising them. To validate this thesis and to discover possible similarities and patterns, the interviewees were asked to draw certain emotions. The following key insights and sketches were generated from the interviews.

Question Zero

Ideate

Structure of the emotion visualisations

Since emotions cannot be firmly defined to this day and are perceived very differently, it was important to move away from terminology. Likewise, there is seldom just one single emotion at play; it is almost always a mixture of several. Accordingly, the aim was not for the emotion visualisation to be assigned to an emotion term, but for the visualisation, in conjunction with the written text, to create more clarity and less ambiguity than the text alone would achieve. The following diagram shows the process from interview sketch to visualisation.

The emotions were illustrated as abstract visualisations. Here are four examples.

Translation from measured value to visualisation

We deliberately decided against sorting measured values into prefabricated emotion categories and then visualising those. That approach would translate emotions from one model (measured values) via a second (language) into a third (visualisations), which the user would then interpret to finally evoke real emotions again; at each step, information is lost or distorted. To avoid the bottleneck of "language", the measured values act directly (or indirectly) on the parameters of the visualisation. The following graphic illustrates the processing of a measured human reaction into an emotion visualisation.
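To illustrate this direct mapping, here is a minimal TypeScript sketch. The sensor channels (heart rate, skin conductance, valence) and the visualisation parameters (hue, pulse rate, edge softness) are assumptions for the example, not the project's actual signal set.

```typescript
// Assumed sensor readings, normalised to the range 0..1 by an upstream AI layer.
interface SensorReading {
  heartRate: number;       // proxy for arousal
  skinConductance: number; // proxy for intensity
  valence: number;         // 0 = negative, 1 = positive
}

// Parameters that drive the abstract visualisation.
interface VisualParams {
  hue: number;          // 0..360, base colour of the shape
  pulseRate: number;    // pulsations per second
  edgeSoftness: number; // 0 = sharp spikes, 1 = soft blob
}

// Measured values act directly on the visual parameters;
// no emotion term ("anger", "joy", ...) is ever assigned in between.
function toVisualParams(r: SensorReading): VisualParams {
  return {
    hue: r.valence * 240,                // valence selects the colour
    pulseRate: 0.5 + r.heartRate * 2.5,  // arousal drives the motion
    edgeSoftness: 1 - r.skinConductance, // intensity sharpens the edges
  };
}
```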

Prototype

Recognising, sending and receiving emotions

Instead of emojis, which can be freely selected and often do not reflect the emotion that is actually prevailing, emotions are read automatically and displayed in a preview. Received emotions are briefly displayed at a large size and can be called up again afterwards.
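A possible shape for such a message, sketched in TypeScript; the field names are assumptions for illustration:

```typescript
// Visualisation parameters as sketched above.
interface VisualParams { hue: number; pulseRate: number; edgeSoftness: number }

// Assumed shape of a chat message with its emotion attached.
interface MPathyMessage {
  text: string;
  sentAt: Date;
  // Parameters captured while the message was being written;
  // absent when the sender removed or paused emotion sharing.
  emotion?: VisualParams;
}
```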

Set what is sent along

It is possible to set which emotions are sent and with what intensity.
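As data, such a setting could look like the following sketch, assuming two adjustable dimensions and a 0..1 intensity scale:

```typescript
// Assumed per-dimension sharing settings.
interface SharingSettings {
  valence: { send: boolean; intensity: number }; // intensity 0..1
  arousal: { send: boolean; intensity: number };
}

// Example: share valence at full strength, arousal only damped.
const mySettings: SharingSettings = {
  valence: { send: true, intensity: 1.0 },
  arousal: { send: true, intensity: 0.4 },
};
```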

Classify and change contacts according to “Trust Level”

Not all emotions should be shown to all chat partners, which is why contacts can be divided into different categories, such as “work”, “family” and “friends”.
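One way to model this filtering, assuming the category names from above and illustrative damping factors:

```typescript
interface VisualParams { hue: number; pulseRate: number; edgeSoftness: number }

type TrustLevel = "work" | "family" | "friends";

// Fraction of the emotion signal passed through per category (assumed values).
const passThrough: Record<TrustLevel, number> = {
  work: 0.3,   // strongly muted
  friends: 0.7,
  family: 1.0, // full signal
};

// Blend the parameters toward a neutral baseline for low-trust contacts.
function filterForContact(p: VisualParams, level: TrustLevel): VisualParams {
  const t = passThrough[level];
  const neutral: VisualParams = { hue: 120, pulseRate: 1, edgeSoftness: 0.5 };
  return {
    hue: neutral.hue + (p.hue - neutral.hue) * t,
    pulseRate: neutral.pulseRate + (p.pulseRate - neutral.pulseRate) * t,
    edgeSoftness: neutral.edgeSoftness + (p.edgeSoftness - neutral.edgeSoftness) * t,
  };
}
```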

Pause emotions

In certain moments, you would rather not share your emotions. In this case, they can be removed from a single message or paused for a certain period of time.
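A minimal sketch of that logic, assuming a timestamp-based pause and a per-message opt-out flag:

```typescript
interface VisualParams { hue: number; pulseRate: number; edgeSoftness: number }

interface PauseState {
  pausedUntil?: Date; // undefined = sharing active
}

// Decide whether an outgoing message carries an emotion payload.
function emotionToSend(
  current: VisualParams,
  pause: PauseState,
  skipForThisMessage: boolean
): VisualParams | undefined {
  const paused = pause.pausedUntil !== undefined && pause.pausedUntil > new Date();
  if (paused || skipForThisMessage) return undefined; // text only
  return current;
}
```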