
Bias 2021 : Second International Workshop on Algorithmic Bias in Search and Recommendation


Link: https://biasinrecsys.github.io/ecir2021/
 
When Apr 1, 2021 - Apr 1, 2021
Where ONLINE EVENT
Submission Deadline Jan 11, 2021
Notification Due Feb 15, 2021
Final Version Due Mar 1, 2021
Categories    bias   fairness   search   recommender systems
 

Call For Papers

Second International Workshop on Algorithmic Bias in Search and Recommendation (Bias 2021)

to be held as part of the 43rd European Conference on Information Retrieval (ECIR 2021)

Workshop: April 1, 2021 (provisional) - ONLINE EVENT

https://biasinrecsys.github.io/ecir2021/

-----------------------------------------------------

Important Dates

-----------------------------------------------------

Submissions: January 11, 2021
Notifications: February 15, 2021
Camera-Ready: March 1, 2021
Workshop: April 1, 2021 (provisional) - ONLINE EVENT
All deadlines are 11:59pm, AoE time (Anywhere on Earth).

------------------------------------------------------

Workshop Aims and Scope

------------------------------------------------------

Both search and recommendation algorithms provide users with rankings that aim to match their needs and interests. Although the two classes of algorithms differ in the degree of personalization they apply, both learn patterns from historical data, and that data often conveys biases in the form of imbalances and inequalities.

In most cases, the trained models and, by extension, the final rankings unfortunately strengthen these biases in the learned patterns. When a bias impacts human beings as individuals or as groups with legally protected characteristics (e.g., race, gender), the inequalities reinforced by search and recommendation algorithms lead to severe societal consequences, such as discrimination and unfairness.

Challenges arising in real-world applications include, among others, controlling the effects of popularity bias to improve the perceived quality of results, supporting consumers and providers with fair rankings, and transparently explaining why a model returns a given (less) biased result. Being able to detect, measure, characterize, and mitigate bias while maintaining high effectiveness is therefore a prominent and timely challenge.

BIAS 2021 is the ECIR workshop aimed at collecting new contributions in this emerging field and providing a common ground for interested researchers and practitioners. It is the second edition of this dedicated event at ECIR, following a very successful 2020 edition. Given the community's growing interest in these topics, we expect the workshop to attract increasing attention, with stronger outcomes and a wider community dialogue.

--------------------------------------------------------

Workshop Keywords

--------------------------------------------------------

Information Retrieval · Recommender Systems · Data and Algorithmic Bias · Fairness

-------------------------------------------------------

Workshop Topics

-------------------------------------------------------

The workshop welcomes contributions on all topics related to algorithmic bias and fairness in search and recommendation, including (but not limited to):

- Data Set Collection and Preparation:
--- Managing imbalances and inequalities within data sets
--- Devising collection pipelines that lead to fair and unbiased data sets
--- Collecting data sets useful for studying potential biased and unfair situations
--- Designing procedures for creating data sets for research on bias and fairness

- Countermeasure Design and Development:
--- Conducting exploratory analyses that uncover biases
--- Designing treatments that mitigate biases (e.g., popularity bias)
--- Devising interpretable search and recommendation models
--- Providing treatment procedures whose outcomes are easily interpretable
--- Balancing inequalities among different groups of users or stakeholders

- Evaluation Protocol and Metric Formulation:
--- Conducting quantitative experimental studies on bias and unfairness
--- Defining objective metrics that consider fairness and/or bias
--- Formulating bias-aware protocols to evaluate existing algorithms
--- Evaluating existing strategies in unexplored domains
--- Comparative studies of existing evaluation protocols and strategies

- Case Study Exploration:
--- E-commerce platforms
--- Educational environments
--- Entertainment websites
--- Healthcare systems
--- Social media
--- News platforms
--- Digital libraries
--- Job portals
--- Dating platforms

-------------------------------------------------------

Submission Details

-------------------------------------------------------

All submissions must be written in English. When preparing their papers, authors should consult the ECIR paper guidelines (http://irsg.bcs.org/proceedings/ECIR_Draft_Guidelines.pdf) and Fuhr's guide to avoiding common IR evaluation mistakes (http://sigir.org/wp-content/uploads/2018/01/p032.pdf). Authors should also consult Springer's authors' guidelines (ftp://ftp.springernature.com/cs-proceeding/svproc/guidelines/Springer_Guidelines_for_Authors_of_Proceedings.pdf) and use the Springer proceedings templates, either LaTeX (ftp://ftp.springernature.com/cs-proceeding/llncs/llncs2e.zip) or Word (ftp://ftp.springernature.com/cs-proceeding/llncs/word/splnproc1703.zip). Papers should be submitted as PDF files to EasyChair at https://easychair.org/conferences/?conf=bias2021. Please be aware that at least one author per paper must register for and attend the workshop to present the work.

We will consider three different submission types:

- Full papers (12 pages) should be clearly placed with respect to the state of the art and state the contribution of the proposal in the domain of application, even if presenting preliminary results. In particular, research papers should describe the methodology in detail, experiments should be repeatable, and a comparison with the existing approaches in the literature should be made.

- Reproducibility papers (12 pages) should either repeat prior experiments using the original source code and datasets to show how, why, and when the methods work or fail (replicability papers), or repeat prior experiments, preferably using the original source code, in new contexts (e.g., different domains and datasets, different evaluation protocols and metrics) to further generalize, validate, or refute previous work (reproducibility papers).

- Short or position papers (6 pages) should introduce new points of view on the workshop topics or summarize a group's experience in the field. Practice and experience reports should present in detail real-world scenarios in which search and recommender systems are deployed.

Submissions should not exceed the indicated number of pages, including all diagrams and references.

The reviewing process will be coordinated by the organizers. Each paper will receive two reviews external to the organizing committee and one review internal to it, according to reviewers' expertise.

The accepted papers and the material generated during the meeting will be available on the workshop website. The workshop proceedings will also be published in a volume, whose details will be announced soon, and indexed on DBLP and Scopus. Authors of selected papers may be invited to submit an extended version to a journal special issue.

We expect authors, PC, and the organizing committee to adhere to the ACM’s Conflict of Interest Policy (https://www.acm.org/special-interest-groups/volunteer-resources/acm-conflict-of-interest-policy) and the ACM’s Code of Ethics and Professional Conduct (https://www.acm.org/code-of-ethics).

---------------------------------------------------------

Workshop Chairs

---------------------------------------------------------

Ludovico Boratto, Eurecat - Centre Tecnológic de Catalunya (Spain)

Stefano Faralli, Unitelma Sapienza University of Rome (Italy)

Mirko Marras, École Polytechnique Fédérale de Lausanne - EPFL (Switzerland)

Giovanni Stilo, University of L’Aquila (Italy)

---------------------------------------------------------

Program Committee

---------------------------------------------------------

TBD

-----------------------------------------------------------

Contacts

-----------------------------------------------------------

For general enquiries on the workshop, please send an email to ludovico.boratto@acm.org, stefano.faralli@unitelmasapienza.it, mirko.marras@epfl.ch, and giovanni.stilo@univaq.it.
