TRECVID is the premier annual international workshop for evaluating content-based retrieval of multimedia and digital video. The workshop organizes a set of tracks, providing data, task definitions, metrics, and evaluation protocols to participants. During the workshop, participants submit notebook papers, discuss results, and benefit from sharing their experiences about which methods work, which do not, and why.
With its high quality and low cost, TRECVID provides exceptional value for students, academics, and industry researchers. Responses to the call for participation are requested by June 1. All registered teams will be added to a Slack workspace by March 1, and all task guidelines will be finalized by April 1.

Call for participation: http://www-nlpir.nist.gov/projects/tv2023/tv23.call.html
Draft version of the TRECVID 2023 task guidelines: http://www-nlpir.nist.gov/projects/tv2023/index.html
Tentative schedule on the main website: https://www-nlpir.nist.gov/projects/tv2023/schedule.html

TRECVID ends with a workshop in November/December 2023, where all teams come together to discuss their approaches to the different tasks and to plan for the next year's evaluation. We look forward to welcoming returning teams as well as new teams and members!

This year TRECVID is running 5 challenge tasks:
1- Ad-hoc Video Search - Given a text query, return the relevant set of videos.
2- Deep Video Understanding - Answer questions about movies.
3- Video to Text - Generate a text caption describing a short video.
4- Medical Video Question Answering - Retrieve instructional medical videos and localize segments of interest.
5- Activities in Extended Videos - Detect activities, including human and/or object activities, in long videos from surveillance cameras.

References:
Previous years' proceedings: https://trecvid.nist.gov/tv.pubs.org.html
Data and resources created: https://trecvid.nist.gov/trecvid.data.html
The Scholarly Impact of TRECVid (2003-2009): http://onlinelibrary.wiley.com/doi/10.1002/asi.21494/full