
AI-FAB 2026 : SIGKDD 2026 1st Workshop on AI for Fraud and Abuse


 
When Aug 9, 2026 - Aug 10, 2026
Where Jeju, Korea
Submission Deadline Apr 30, 2026
Categories    fraud and abuse   machine learning   AI   anomaly detection
 

Call For Papers

1st Workshop on AI for Fraud and Abuse (AI-FAB 2026)
In conjunction with ACM SIGKDD 2026, Jeju, Korea | August 9–13, 2026

Website: https://sites.google.com/view/kdd-fraud-abuse/home

OVERVIEW

Fraud and abuse are no longer isolated, domain-specific problems; they are increasingly enabled by automated agents and cross-platform generative AI frameworks. AI-FAB 2026 aims to bridge domain-specific silos to identify universal patterns and scalable AI-driven defenses. Crucially, the workshop seeks to complement model-performance research with the economic and quantifiable realities of fraud: we aim to go beyond detection alone to model attacker ROI, address the fundamental survivorship bias in labeling, and explore the optimization of friction through mechanism design. We invite submissions at the intersection of a broad range of AI methodologies, including adversarial ML, industry-specific defense challenges, and practical applications in fraud detection, quantification, and robustness.

TOPICS OF INTEREST

We solicit papers on the following (non-exhaustive) list of topics:

Adversarial and Generative AI Challenges
- Detection of synthetic identities, deepfakes, and AI-generated phishing
- Automated social engineering of LLMs/LMMs
- Synthetic data generation for defense-side data augmentation
- Adversarial robustness and red teaming – evasion, poisoning, and model inversion attacks; automated adversarial testing using simulation environments, world models, and post-attack model integrity assessment
- Adaptive and autonomous systems – RL and agentic frameworks for self-evolving strategies, task planning for investigations, and adapting to rapid concept drift in adversarial behavior

Core AI and Data Mining Methodologies
- Anomaly detection and representation learning – unsupervised and semi-supervised learning (SSL) for rare-event detection, architectures for high-dimensional imbalanced datasets, cost-sensitive classification, and adaptive thresholding
- Survivorship bias in SSL settings – building defenses with incomplete or potentially poisoned ground-truth data; false-negative estimation
- Graph Neural Networks and Graph Foundation Models (GNNs and GFMs) – semantic knowledge graphs for domains with rich expert-driven corpora, community detection, and temporal graphs
- Multimodal intelligence – integrating LLMs, NLP, graphs, computer vision, and other data sources for cross-channel detection

Holistic Application Domains
- Financial integrity – banking, fintech, and digital payment fraud (first-party and third-party fraud, or non-intent-to-repay models); cryptocurrency forensics and blockchain-based crime detection; Anti-Money Laundering (AML) and Know Your Customer (KYC) innovations
- Platform and infrastructure abuse – e-commerce abuse (e.g., return fraud, promo abuse) and ad fraud (invalid traffic, content safety violations, etc.); cloud infrastructure exploitation and resource-exhaustion attacks; systemic manipulation of search, recommendation, and ranking algorithms
- Social and behavioral integrity (Trust & Safety) – coordinated influence campaigns and digital dis/misinformation; online toxicity, harassment, and behavioral platform abuse; detection of high-harm patterns (e.g., child trafficking, illicit sales)

Fraud Economics & Mechanism Design
- Modeling attacker economics – analysis of the attacker's production function, ROI, and price elasticity in the GenAI era
- Friction optimization – designing verification steps and delays (mechanism design) that break the attacker's unit economics without degrading user experience

Open Science and Benchmarking in Fraud and Abuse
- Developing verifiable, non-proprietary benchmarks for fraud and abuse
- Federated learning and privacy-preserving detection
- Using synthetic environments to allow industry-academic collaboration without exposing sensitive real-world data


IMPORTANT DATES (All deadlines are 11:59 PM AoE)

Submission Deadline: April 30, 2026
Author Notification: June 4, 2026
Final Materials Due: June 22, 2026
Workshop Date: August 9, 2026 (Tentative)

SUBMISSION GUIDELINES

Format: All submissions must be PDFs in the Standard ACM Conference Proceedings Template (sigconf format).
Page Limit: 4–8 content pages (including figures/tables), excluding references.
Anonymity: Reviews are double-blind. Submissions must not list author names or affiliations.
Publication: Accepted papers will be posted on the workshop website. Note that KDD workshop papers are generally non-archival to allow for future journal submission.


ORGANIZING COMMITTEE

Leman Akoglu (Carnegie Mellon University)
Lavanya Basavaraju (U.S. Bank)
Cristián Bravo (Western University)
Holly Ferguson (Independent Researcher)
Shalini Ghosh (Google)
Panos Ipeirotis (New York University)
Dhagash Mehta (BlackRock, Inc.)
Saurabh Nagrecha (Google) — Main Contact
