TrustNLP 2026: The 6th Trustworthy NLP Workshop at ACL 2026
Link: https://trustnlpworkshop.github.io/

Call For Papers
First Call for Papers: the 6th Trustworthy NLP Workshop (TrustNLP) at ACL 2026
With rapid advances in AI, driven by large language models (LLMs) and natural language processing (NLP) techniques, AI systems that interact directly with users and assist with our daily tasks are becoming increasingly integrated into everyday life. In particular, the recent development of agentic models allows users to communicate directly with AI for complex tasks such as coding, web browsing, information seeking, and deep research. These models integrate NLP techniques with computer vision, systems engineering, and other social and physical sciences, expanding the boundaries of what AI systems can accomplish and making NLP systems omnipresent in our everyday lives. This makes the development of reliable, responsible, ethical, and safe AI increasingly important. This year, we are excited to host the TrustNLP workshop at ACL 2026, inviting participants and papers that focus on developing models that are explainable, fair, privacy-preserving, causal, and robust. We have secured sponsorship from major companies in the field, including Meta, Capital One, and Amazon, and will use the funding to promote diversity, participation, and mentoring in furtherance of our mission.

We invite papers that focus on different aspects of safe and trustworthy language modeling. Topics of interest include (but are not limited to):

- Privacy-Preserving Model Training
- Unlearning and Model Editing
- Fairness and Bias: Evaluation and Treatments
- Model Explainability and Interpretability
- Culturally-Aware and Inclusive LLMs
- Accountability, Safety, and Robustness
- Red-Teaming, Backdoor, or Adversarial Attacks and Defenses for LLM Safety
- Ethics, Social Responsibility, and Dual Use
- Causal Inference and Fair ML
- Secure, Faithful, Safe, and Trustworthy Data/Language Generation
- Hallucination and Unqualified Suggestion
- Toxic Language Detection and Mitigation
- Industry Applications of Trustworthy NLP

We welcome contributions that also draw upon interdisciplinary knowledge to advance trustworthy NLP. This may include working with, synthesizing, or incorporating knowledge across areas of expertise, sociopolitical systems, cultures, or norms.

Important Dates

- March 5, 2026: Workshop paper due date (direct submission via OpenReview)
- April 10, 2026: Workshop paper due date (fast track)
- April 10, 2026: Deadline for relevant ACL Findings papers to opt for non-archival submission
- April 28, 2026: Notification of acceptance
- May 12, 2026: Camera-ready papers due
- June 4, 2026: Pre-recorded videos due
- July 4, 2026: Workshop date

Submission Information

All submissions undergo double-blind peer review (with author names and affiliations removed) by the program committee and will be assessed based on their relevance to the workshop themes. All submissions go through OpenReview; to submit, use the submission link. Submitted manuscripts may be up to 8 pages long for full papers and 4 pages long for short papers; both may have unlimited pages for references and appendices. Please follow the ACL submission policies. Note that at least one author of each accepted paper must register for the workshop and present the paper. Template files can be found here.

Fast-Track Submission

If your paper has been reviewed by ACL, EMNLP, EACL, or ARR and its average rating is higher than 2.75 (either the average soundness or the average excitement score), it qualifies for fast-track submission. In the appendix, please include the reviews and a short statement discussing which parts of the paper have been revised.
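As a concrete reading of the rule above, here is a minimal Python sketch of the eligibility check (the score names and review structure are illustrative assumptions, not an official tool):

    # Minimal sketch of the fast-track eligibility rule; score names are illustrative.
    def qualifies_for_fast_track(soundness, excitement, threshold=2.75):
        """A paper qualifies if either average review score exceeds the threshold."""
        average = lambda scores: sum(scores) / len(scores)
        return average(soundness) > threshold or average(excitement) > threshold

    # Example: soundness scores 3, 3, 2.5 average to about 2.83 > 2.75, so the paper qualifies.
    print(qualifies_for_fast_track([3, 3, 2.5], [2.5, 2.5, 3]))  # True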
Non-Archival Option

ACL workshops are traditionally archival. To allow dual submission of work, we also offer a non-archival track. If accepted, these submissions will still be presented at the workshop; a reference to the paper will be hosted on the workshop website (if desired), but the paper will not be included in the official proceedings. Please submit through OpenReview and indicate that this is a cross submission at the bottom of the submission form. You can also skip this step and inform us of your non-archival preference after the reviews. Papers accepted to the Findings of ACL 2026 may also be submitted to the workshop as non-archival versions. Accepted and under-review papers may be submitted to the workshop but will not be included in the proceedings. No anonymity period is required for papers submitted to the workshop, per the latest updates to the ACL anonymity policy; however, submissions must still remain fully anonymized.

Contact the organizers by email: trustnlpworkshoporganizers@gmail.com

Read more: https://trustnlpworkshop.github.io/

Organizers:

- Kai-Wei Chang - UCLA, Amazon Nova RAI
- Ninareh Mehrabi - Meta
- Satyapriya Krishna - Amazon Nova RAI
- Anubrata Das - University of Texas at Austin
- Jwala Dhamala - Amazon Nova RAI
- Yang Trista Cao - Amazon Nova RAI
- Tharindu Kumarage - Amazon Nova RAI
- Anil Ramakrishna - Meta
- Christos Christodoulopoulos - Information Commissioner's Office
- Yixin Wan - UCLA
- Aram Galstyan - USC, Amazon AGI
- Anoop Kumar - Capital One
- Rahul Gupta - Amazon Nova RAI