ACL 2023 : 61st Annual Meeting of the Association for Computational Linguistics (ACL’23)
Link: https://2023.aclweb.org

Call For Papers
Website: https://2023.aclweb.org/
Submission Deadline:
- ARR: 15 December 2022
- START Direct: 13 January 2023 (Abstract), 20 January 2023 (Paper)
Conference Dates: July 9-14, 2023
Location: Toronto, Canada
Theme: Reality Check
Contact:
- Yang Liu (General Chair)
- Jordan Lee Boyd-Graber, Naoaki Okazaki, Anna Rogers (Program Chairs): acl2023-pc@googlegroups.com

============================

Call for Main Conference Papers

ACL 2023 invites the submission of long and short papers featuring substantial, original, and unpublished research in all aspects of Computational Linguistics and Natural Language Processing. As in recent years, some of the presentations at the conference will be of papers accepted by the Transactions of the ACL (TACL) and the Computational Linguistics (CL) journals.

=== Important Dates ===

- Submission template available: November 1, 2022
- Anonymity period for ARR papers: November 15, 2022
- Submission deadline for papers submitted to ARR: December 15, 2022
- Anonymity period for papers submitted through START: December 20, 2022
- Abstract deadline for START direct submissions: January 13, 2023
- Direct paper submission deadline: January 20, 2023
- Commitment deadline for ARR papers: March 17, 2023
- Author response period: March 17-24, 2023
- Notification of acceptance: May 1, 2023
- Withdrawal deadline: May 8, 2023
- Camera-ready papers due: May 22, 2023
- Tutorials: July 9, 2023
- Conference: July 10-12, 2023
- Workshops: July 13-14, 2023

All deadlines are 11:59PM UTC-12:00 (“anywhere on Earth”).

=== Submission Topics ===

ACL 2023 aims to have a broad technical program. Relevant topics for the conference include, but are not limited to, the following areas (in alphabetical order):

- Computational Social Science and Cultural Analytics
- Dialogue and Interactive Systems
- Discourse and Pragmatics
- Ethics and NLP
- Generation
- Information Extraction
- Information Retrieval and Text Mining
- Interpretability and Analysis of Models for NLP
- Language Grounding to Vision, Robotics and Beyond
- Linguistic Theories, Cognitive Modeling, and Psycholinguistics
- Machine Learning for NLP
- Machine Translation
- Multilingualism and Language Contact: Code-switching, Representation Learning, Cross-lingual Transfer
- NLP Applications
- Phonology, Morphology, and Word Segmentation
- Question Answering
- Resources and Evaluation
- Semantics: Lexical
- Semantics: Sentence-level Semantics, Textual Inference, and Other Areas
- Sentiment Analysis, Stylistic Analysis, and Argument Mining
- Speech and Multimodality
- Summarization
- Syntax: Tagging, Chunking and Parsing

=== Theme Track: Reality Check ===

Following the success of the ACL 2020-2022 theme tracks, we are happy to announce that ACL 2023 will have a new theme, with the goal of reflecting on and stimulating discussion about the current state of development of the field of NLP. While current systems perform much better and fail more gracefully than their rule-based predecessors, there is growing evidence of other kinds of brittleness: poor out-of-domain generalization, vulnerability to adversarial attacks, reliance on spurious patterns (both linguistic and social), lack of sensitivity to basic linguistic perturbations such as negation, over-sensitivity to perturbations that should not matter (e.g., the order and wording of prompts), and so on. The theme track invites empirical and theoretical research, as well as position and survey papers, reflecting on the ways in which reported performance improvements on NLP benchmarks are meaningful.
Possible topics of discussion include (but are not limited to) the following:
- How reliably do the leaderboard scores translate to improvements in real-world use of the models?
- How reliably do the leaderboard scores compare competing models?
- While current NLP systems are not brittle in the same way as their predecessors, in what ways are they still brittle?
- What tasks can we claim to have “solved”, if any?
- Have performance improvements been accompanied by commensurate growth in the scientific understanding (of language, cognition, or deep learning technology)? In what ways?
- Given that the authors of engineering papers are incentivized to report only the most successful results, especially for systems that are also commercial products, what can NLP venues do to improve reporting?

Theme track submissions can be either long or short. We anticipate having a special session for this theme at the conference and a Thematic Paper Award, in addition to other categories of awards.

For more information, visit: https://2023.aclweb.org/calls/main_conference/