Author: Fabrizio Nunnari

  • Animating 3D avatars from multi-point videos and its application to sign languages

    Thesis proposal for students at Saarland University, Saarbrücken, Germany. Updated: 14.01.2026. Proposed by the SCAAI Group (Social, Cognitive and Affective AI, https://scaai.dfki.de) at the German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany. Introduction/Problem: One of the most effective approaches for translating text into sign language is the animation of signing 3D…

  • Health literacy – AI as a game changer?!

    Our research fellow Jan Alexandersson took part in a conference organized by the Federal Institute for Public Health (BIÖG) and the Federal Ministry of Health on November 19, 2025, in Berlin. An interview excerpt is available here: https://www.bioeg-fachtag.de/gesundheitskompetenz/ Excerpt: “The findings from the current Bitkom study and the conference organized by the Federal Institute for Public Health (BIÖG) and the Federal Ministry…

  • SLTAT 2025 publications

    On September 16, 2025, the SCAAI group organized the 9th International Workshop on Sign Language Translation and Avatar Technology (SLTAT 2025) and contributed six publications! The SLTAT presentation itself: Nunnari, F. et al. (2025) “9th International Workshop on Sign Language Translation and Avatar Technology (SLTAT 2025),” in Adjunct Proceedings of the 25th ACM International Conference on Intelligent Virtual…

  • Project Skills4Kids started

    Project Skills4Kids (Modeling socially interactive avatar behavior to support the promotion of healthy emotion regulation strategies) started on 01.07.2025.

  • Project CONFIDENCE started

    Project CONFIDENCE (Mobile interactive experiential therapy self-esteem training for treating bullying among children) started on 01.03.2025.

  • MindBot Journal paper on Frontiers in Robotics and AI

    Our new journal paper, titled “Socially interactive industrial robots: a PAD model of flow for emotional co-regulation”, investigates socially interactive robots for industrial assembly. By embodying a cobot with an avatar and employing real-time emotional modeling, we aim to cultivate flow and mitigate negative experiences like boredom. Our findings suggest a path towards more…

  • We organise the ACM MultiMediate Grand Challenge

    ACM MultiMediate Grand Challenge — https://multimediate-challenge.org/ In collaboration with researchers at Augsburg University, Stuttgart University, and INRIA Sophia Antipolis, the SCAAI group is organising the fifth MultiMediate challenge, focussing on engagement estimation across cultures and interaction domains. Estimating the momentary level of engagement from multi-modal participant behaviour is an important prerequisite for assistive systems…

  • We organise the BLEMORE workshop @ ACII’25

    BLEMORE @ ACII’25 — https://blemore.github.io/workshop/ In collaboration with researchers from Uppsala University and Georgian Technical University, the SCAAI group is organising BLEMORE – the workshop and competition on multimodal blended emotion recognition. It will be held at the 13th International Conference on Affective Computing and Intelligent Interaction (ACII 2025) in Canberra, Australia. BLEMORE aims to…

  • Call for papers: SLTAT 2025, 9th International Workshop on Sign Language Translation and Avatar Technology

    SLTAT 2025 home page: https://sltat2025.github.io We (SCAAI) are proudly organising SLTAT 2025, the 9th International Workshop on Sign Language Translation and Avatar Technology. Call for papers! We are looking for quality submissions advancing research on technology support for sign languages. The workshop will take place within the…

  • BIGEKO project meeting

    Today and tomorrow (17.–18.03.2025), we are participating in the second BIGEKO project review meeting at Augsburg University, Germany. We will check the project status and set the roadmap for the coming year. BIGEKO stands for “Sign Language Recognition Model for bidirectional translation of sign language and text including emotional information”. Check the BIGEKO project…