Deep Neural Audio Processing @ IJCNN 2018: Special Session on "Deep Neural Audio Processing" at IJCNN 2018
Link: http://www.ecomp.poli.br/~wcci2018/ijcnn-sessions/#ijcnn4

Call For Papers

DESCRIPTION
Computational audio processing techniques have been widely adopted by scientists and engineers in many application areas, such as entertainment, human-machine interfaces, security, forensics, and health. Depending on the problem under study, these techniques have been successfully applied to speech signal processing (speech/speaker recognition, speech enhancement, emotion and speaker state recognition, privacy-preserving speech processing), to music information retrieval and automated music generation, and to generic sound processing for acoustic monitoring, acoustic scene understanding, and sound separation, detection, and identification. In the analysis of animal vocalizations, recent efforts have addressed the automatic classification and recognition of animal species from their emitted sounds. Across these fields, state-of-the-art performance has recently been obtained with data-driven learning systems, often variants of deep neural network architectures. Several challenges remain open, owing to the increasing complexity of the tasks, the presence of non-stationary operating conditions, the gap between laboratory and real acoustic scenarios, and the need to meet hard real-time constraints, even when the amount of data to process is large and battery-powered devices are involved. In other application contexts, the challenge is coping with scarce training data, which calls for purpose-built architectures and algorithms. Moreover, cross-domain approaches that exploit the information contained in diverse kinds of environmental audio signals are often needed, as recently investigated by some pioneering works.
SCOPE AND TOPICS
The aim of this special session is to provide a forum for presenting the most recent advances in deep neural network algorithms applied to digital audio problems, with particular attention to speech analysis and enhancement, music information retrieval and generation, and acoustic scene and event detection and classification, also exploring new and emerging methods such as end-to-end and one-shot/zero-shot learning.

Topics include, but are not limited to:
• Computational Audio Analysis
• Deep Learning Algorithms in Digital Audio
• Neural Architectures for Audio Processing
• Transfer, Weakly Supervised, and Reinforcement Learning for Audio
• Music Information Retrieval
• Music Performance Analysis
• Neural Methods for Music/Speech Generation and Synthesis
• Computational Methods for Physical Instrument Modeling
• Music Content Analysis
• Voice Conversion
• Speech and Speaker Analysis and Classification
• Sound Detection and Identification
• Acoustic Novelty Detection
• Computational Methods for Wireless Acoustic Sensor Networks
• Acoustic Scene Analysis
• Cross-domain Audio Analysis
• Signal Enhancement with Neural Networks
• End-to-End Learning for Digital Audio Applications
• Privacy-Preserving Computational Speech Processing
• One-shot/Zero-shot Learning for Digital Audio Applications

IMPORTANT DATES
• Paper Submission Deadline: 01 February 2018
• Paper Acceptance Notification: 15 March 2018
• Final Paper Submission & Early Registration Deadline: 01 May 2018
• IEEE IJCNN 2018: 08-13 July 2018

SPECIAL SESSION ORGANISERS
Emanuele Principi, Università Politecnica delle Marche, Italy, e.principi@univpm.it
Stefano Squartini, Università Politecnica delle Marche, Italy, s.squartini@univpm.it
Aurelio Uncini, Università La Sapienza, Italy, aurel@ieee.org
Björn Schuller, University of Passau, Germany / Imperial College London, UK, schuller@ieee.org