3DFAW 2016 : 1st Workshop and Challenge on 3D Face Alignment in the Wild @ ECCV 2016
Link: http://mhug.disi.unitn.it/workshop/3dfaw/

Call For Papers | |||||||||||||||
**********************************************************************
1st Workshop and Challenge on 3D Face Alignment in the Wild
3DFAW 2016

http://mhug.disi.unitn.it/workshop/3dfaw/
https://competitions.codalab.org/competitions/10261

Amsterdam, Netherlands - October 9, 2016
in conjunction with ECCV 2016
**********************************************************************

---------------
CALL FOR PAPERS
---------------

Within the past 15 years, there has been increasing interest in automated facial alignment within the computer vision and machine learning communities. Face alignment -- the problem of automatically locating detailed facial landmarks across different subjects, illuminations, and viewpoints -- is critical to all face analysis applications, such as identification and facial expression and action unit analysis, and to many human-computer interaction and multimedia applications.

The most common approach is 2D alignment, which treats the face as a 2D object. This assumption holds as long as the face is frontal and planar. As face orientation varies from frontal, however, the assumption breaks down: 2D annotated points lose correspondence, and pose variation results in self-occlusion that confounds landmark annotation.

To enable alignment that is robust to head rotation and depth variation, 3D imaging and alignment have been explored. 3D alignment, however, requires special sensors for imaging, or multiple images and controlled illumination. When these requirements cannot be met, which is common, 3D alignment from 2D video or images is a potential solution.

This workshop addresses the increasing interest in 3D alignment from 2D images. This topic is germane to both the computer vision and multimedia communities. For computer vision, it is an exciting approach to the longstanding limitations of 2D approaches. For multimedia, 3D alignment enables more powerful applications.

Main track
~~~~~~~~~~

3DFAW is intended to bring together computer vision and multimedia researchers whose work is related to 2D or 3D face alignment.
We are soliciting original contributions that address a wide range of theoretical and application issues of 3D face alignment for computer vision and multimedia applications, including but not limited to:

- 3D and 2D face alignment from 2D images
- Model- and stereo-based 3D face reconstruction
- Dense and sparse face tracking from 2D and 3D inputs
- Applications of face alignment
- Face alignment for embedded and mobile devices
- Facial expression retargeting (avatar animation)
- Face alignment-based user interfaces

3DFAW Challenge
~~~~~~~~~~~~~~~

The 3DFAW Challenge evaluates 3D face alignment methods on a large, diverse corpus of multi-view face images annotated with 3D information. The corpus includes images obtained under a range of conditions, from highly controlled to in-the-wild.

To participate in the challenge, please go to the 3DFAW Challenge on CodaLab:
https://competitions.codalab.org/competitions/10261

To obtain the data, please download, fill out, and sign the data license agreement (http://mhug.disi.unitn.it/workshop/3dfaw/3DFAW_EULA.pdf) and send it back to sergey.tulyakov(at)unitn.it.

----------
SUBMISSION
----------

Papers that are not blind, do not use the template, or have more than 14 pages (excluding references) will be rejected without review. We will consider papers rejected at ECCV 2016 if they are accompanied by a cover letter describing the differences between the ECCV and 3DFAW submissions.

For the challenge:
~~~~~~~~~~~~~~~~~~

To be considered in the official ranking, either:

1) Provide a description of your approach in the form of an extended abstract (2 pages). These abstracts will be made available on the 3DFAW website, but not in the published proceedings.

2) To be considered for publication in the proceedings, submit a workshop paper. All participants are invited to submit a paper of at most 14 pages (excluding references) in the standard ECCV format.
Accepted papers will be included in the proceedings of the ECCV 2016 3D Face Alignment in the Wild (3DFAW) Challenge workshop.

---------------
IMPORTANT DATES
---------------

Main track (***extended***)

July 27th, 2016: Full paper submission deadline
August 17th, 2016: Notification of acceptance
August 29th, 2016: Camera-ready deadline

Challenge (***extended***)

June 13th, 2016: Beginning of the competition; training and validation data are released
July 13th, 2016: Release of evaluation data without labels
July 27th, 2016: Competition ends; top-ranked participants are invited to submit their work following ECCV 2016 guidelines to appear in the 3DFAW workshop proceedings
August 3rd, 2016: Challenge paper submission deadline
August 17th, 2016: Notification of acceptance
August 29th, 2016: Camera-ready deadline

-------------------
WORKSHOP ORGANIZERS
-------------------

Jeff Cohn, CMU / University of Pittsburgh, USA
Laszlo Jeni, CMU, USA
Nicu Sebe, University of Trento, Italy
Sergey Tulyakov, University of Trento, Italy
Lijun Yin, Binghamton University, USA

---------------------------
TECHNICAL PROGRAM COMMITTEE
---------------------------

Simon Lucey, Carnegie Mellon University, USA
Sergio Escalera, University of Barcelona, Spain
Yoichi Sato, University of Tokyo, Japan
Gabor Szirtes, RealEyes Inc
Jason Saragih, Oculus Inc
Qiang Ji, Rensselaer Polytechnic Institute, USA
Michel Valstar, University of Nottingham, UK
Abhinav Dhall, Australian National University, Australia
Roland Goecke, University of Canberra, Australia
Jixu Chen, Magic Leap, USA
Enver Sangineto, University of Trento, Italy
Xiaoming Liu, Michigan State University, USA
Kun Zhou, Zhejiang University, China
Hatice Gunes, University of Cambridge, UK
Dimitris Metaxas, Rutgers University, USA
Volker Blanz, University of Siegen, Germany

---------------------
JOURNAL SPECIAL ISSUE
---------------------

A special issue in a top journal is planned.

For more information: http://mhug.disi.unitn.it/workshop/3dfaw/