
Affective Speech and Language Synthesis 2022 : Special Issue on Affective Speech and Language Synthesis, Generation, and Conversion


Link: https://www.computer.org/digital-library/journals/ta/call-for-papers-special-issue-on-affective-speech-and-language-synthesis-generation-and-conversion
 
Submission Deadline Mar 31, 2022
Notification Due Jun 1, 2022
Final Version Due Jul 15, 2022
Categories: speech emotion conversion, natural language generation, affective computing, human-machine interaction
 

Call For Papers

***************
Aim & Scope
***************
As an inseparable and crucial part of spoken language, emotions play a substantial role in human-human conversation. They convey information about a person’s needs, how one feels about the objectives of a conversation, the trustworthiness of one’s verbal communication, and more. Accordingly, substantial efforts have been made to generate affective text and speech for conversational AI, artificial storytelling, machine translation, and more. Similarly, there is a push for converting the affect in text and speech – ideally in real time and with full preservation of intelligibility – e.g., to hide one’s emotion, for creative applications and entertainment, or even to augment training data for affect-analyzing AI.

The rapid development of deep neural networks has increased the ability of computers to produce natural speech and language in many languages. Novel methodologies, including attention-based and sequence-to-sequence Text-to-Speech (TTS), have shown promise in synthesizing high-quality speech directly from text inputs. However, most TTS systems do not convey the emotional context that is omnipresent in human-human interaction, and the lack of emotion in the generated speech is likely a major reason for the low perceived likeability of such systems. Conversely, generative models such as WaveNet, which operate directly on raw audio waveforms rather than on text input alone, make it possible to condition the emotion of the produced speech. Further, variants of generative adversarial networks (GANs), such as StarGANs or StyleGANs, have been successfully applied to speech-based emotion conversion and generation. Similarly, in affective natural language generation and conversion, deep-learning approaches have considerably changed the landscape and opened up new capabilities based on massive language corpora and models. Yet large-scale applications featuring human-like, real-time generation and conversion of affect in spoken and written language are still to come: research in this field remains in its infancy and calls for a new perspective when designing neural speech and language synthesis, generation, and conversion models that account for human affect, enabling more natural human-AI interaction and a rich plethora of further applications.
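The emotion-conditioning idea referenced above – feeding a learned emotion representation to the generator alongside the linguistic content – can be sketched in a few lines. The emotion categories, embedding sizes, and the lookup-table-plus-concatenation scheme below are illustrative assumptions, not taken from WaveNet or any particular TTS system:

```python
import numpy as np

# Hypothetical emotion set and embedding table; real systems learn these
# jointly with the synthesis model.
EMOTIONS = {"neutral": 0, "happy": 1, "sad": 2, "angry": 3}
rng = np.random.default_rng(0)
emotion_table = rng.normal(size=(len(EMOTIONS), 8))  # one 8-dim vector per emotion

def condition_on_emotion(text_encoding: np.ndarray, emotion: str) -> np.ndarray:
    """Broadcast the emotion embedding across time and concatenate it to
    each frame of the text-encoder output, so the decoder sees both the
    linguistic content and the target affect at every step."""
    emb = emotion_table[EMOTIONS[emotion]]              # shape (8,)
    tiled = np.tile(emb, (text_encoding.shape[0], 1))   # shape (T, 8)
    return np.concatenate([text_encoding, tiled], axis=1)

# Toy encoder output: 5 time steps, 16-dim features
enc = rng.normal(size=(5, 16))
cond = condition_on_emotion(enc, "happy")
print(cond.shape)  # (5, 24)
```

In a full system the concatenated features would drive an autoregressive or GAN-based decoder; here the sketch only shows where the affect signal enters the pipeline.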

This special issue invites contributions on affective speech and language synthesis, generation, and conversion, expanding research on current methodologies in this field as well as on novel applications that integrate such technology. We welcome work from theoretical and practical perspectives as well as application-oriented studies.

*****************************************************************************
Topics of interest for this special issue include, but are not limited to:
*****************************************************************************
- Affective speech synthesis methods
- Affective natural language generation methods
- Methods for affect conversion in spoken and written language
- Integration of affective speech and language in conversational AI
- Evaluation methods and user studies for the above
- Databases for affective speech and language synthesis, generation, and conversion
- Applications of affective speech and language synthesis, generation, and conversion

*******************
Important Dates
*******************
Submission Deadline: 31 March 2022
Reviews Due: 1 May 2022
Revision Deadline: 15 July 2022
Final Decision: 1 September 2022
Publication: September 2022

**************************
Submission Guidelines
**************************
For author information and guidelines on submission criteria, please visit the TAC Author Information page (https://www.computer.org/csdl/journal/ta/write-for-us/15060). Please submit papers through the ScholarOne system (https://mc.manuscriptcentral.com/taffc-cs), and be sure to select the special issue name. Manuscripts should not be published or currently submitted for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.

****************
Guest Editors
****************
Shahin Amiriparian, University of Augsburg, Germany
Björn Schuller, Imperial College London, UK
Nabiha Asghar, Microsoft, USA
Heiga Zen, Google Research, Japan
Felix Burkhardt, audEERING / Technical University of Berlin, Germany
