
FacesMM 2019: The 2nd IEEE International Workshop on Faces in Multimedia (2019 ICME Workshop)


Link: https://web.northeastern.edu/smilelab/facesmm19/index.html
 
When: Jul 8, 2019 - Jul 12, 2019
Where: Shanghai, China
Submission Deadline: Mar 1, 2019
Notification Due: Mar 20, 2019
Final Version Due: Apr 15, 2019
Categories: machine vision, facial recognition, deep learning, modeling & analysis of faces
 

Call For Papers

IEEE International Workshop in conjunction with 2019 ICME

2nd Workshop on Faces in Multimedia (FacesMM)

-- To Automatically Synthesize, Recognize, and Understand Faces in the Wild

###
# Call For Papers
###

There have been remarkable advances in facial recognition technologies over the past several years, driven by the rapid development of deep learning and large-scale, labeled face collections. As a result, ever more challenging image and video collections are now available for tackling emerging problems in the fields of faces and multimedia. In parallel to face recognition, researchers continue to show increasing interest in the topic of face synthesis. Work has been done using imagery, videos, and various other modalities (e.g., hand sketches, 3D models, viewpoints): some efforts focus on an individual or individuals (e.g., with/without makeup, age variation, predicting a child's appearance from the parents, face swapping), while others leverage generative modeling for semi-supervised learning of recognition or detection systems. In addition, generative models provide methodologies to automatically interpret and analyze faces for a better understanding of visual context (e.g., relationships of persons in a photo, age estimation, occupation recognition). It is an age in which many creative approaches and views are being proposed for face synthesis.

Advances are also being made in other technologies involving automatic face understanding: face tracking (e.g., landmark detection, facial expression analysis, face detection), face characterization (e.g., behavioral understanding, emotion recognition), facial characteristic analysis (e.g., gait, age, gender, and ethnicity recognition), group understanding via social cues (e.g., kinship), and visual sentiment analysis (e.g., temperament, arrangement). The ability to model faces with high certainty has significant value in both the scientific community and the commercial market, with applications spanning HCI, social-media analytics, video indexing, visual surveillance, and online vision.

The 2nd Workshop on Faces in Multimedia (FacesMM) serves as a forum for researchers to review recent progress in automatic face understanding and synthesis in multimedia. Special interest will be given to generative-based modeling. The workshop will include two keynotes, along with peer-reviewed papers (oral and poster). Novel, high-quality contributions are solicited on the following topics:

Face synthesis and morphing; works on generative modeling;

Soft biometrics; profiling faces: age, gender, ethnicity, personality, kinship, occupation, and beauty ranking;

Deep learning practice for social face problems with ambiguity, including kinship verification, family recognition, and retrieval;

Discovery of the social groups from faces and the context;

Mining social face relations through metadata as well as visual information;

Tracking, extraction, and analysis of face models captured by mobile devices;

Face recognition in low-quality or low-resolution video or images;

Novel mathematical models & algorithms; sensors & modalities for face, body pose, and action representation;

Analysis and recognition for cross-domain social media;

Novel social applications involving detection, tracking & recognition of faces;

Face analysis for sentiment analysis in social media;

Other applications involving face analysis in social media content.

###
# Previous FacesMM Workshops
###

Take a look back at last year's FacesMM workshop, https://web.northeastern.edu/smilelab/FacesMM2018/



###
# Important Dates
###

1 March 2019: Submission Deadline
20 March 2019: Notification
15 April 2019: Camera-Ready Due


###
# Author Guidelines
###

Submissions handled via CMT website: https://cmt3.research.microsoft.com/ICME2019W/Submission/Index

Following the guidelines of ICME 2019: http://www.icme2019.org/author_info#General_Information

6 pages (including references)
Anonymous
Using the ICME template


###
# Organizers
###

Yun Fu, Northeastern University, http://www1.ece.neu.edu/~yunfu/

Joseph Robinson, Northeastern University, http://www.jrobsvision.com

Ming Shao, University of Massachusetts (Dartmouth), http://www.cis.umassd.edu/~mshao/

Siyu Xia, Southeast University, Nanjing, China, http://www.escience.cn/people/siyuxia/


###
# Contact
###

Joseph Robinson (robinson.jo@husky.neu.edu)
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA

Ming Shao (mshao@umassd.edu)
Computer and Information Science, University of Massachusetts Dartmouth, Dartmouth, MA, USA
