SIIRL 2026: Applied Soft Computing - Special Issue on Interpretable Reinforcement Learning
Link: https://www.sciencedirect.com/special-issue/322834/interpretable-reinforcement-learning
Call For Papers
Applied Soft Computing - Special Issue on Interpretable Reinforcement Learning
Reinforcement Learning (RL) has achieved significant successes in a variety of domains, from game playing to autonomous driving, control systems, and decision-making problems. However, the interpretability of RL models remains a critical challenge. Interpretable Reinforcement Learning (IRL) focuses on creating models that not only perform well but are also understandable to humans. Enhancing the interpretability of RL models could also significantly help in addressing the reality gap (the performance difference between simulations and real-world applications), as more transparent models provide better insights into decision-making processes and facilitate smoother transitions from simulations to real environments. This field has recently gained significant attention from both the academic and industrial communities, leading to various initiatives such as the Interpretable Control Competition at GECCO 2024. IRL has also been identified as one of the main areas where soft computing techniques, such as evolutionary algorithms, may be an enabling factor.

This special issue seeks to gather cutting-edge research that advances the theory, methodologies, and applications of interpretable reinforcement learning, with particular emphasis on approaches based on soft computing (such as, but not limited to, evolutionary computation).

We invite high-quality submissions on topics including, but not limited to:

Theoretical Foundations of Interpretable RL:
- New frameworks for interpretable decision-making in RL.
- Formal definitions and metrics for interpretability in RL contexts.
- Analytical and empirical studies on the trade-offs between interpretability and performance.

Methods and Techniques:
- Techniques for extracting interpretable policies from complex RL models.
- Novel algorithms that inherently produce interpretable solutions, such as evolutionary and swarm intelligence techniques.
- Visualization tools and methods for RL models and policies.
- Use of symbolic, rule-based, or other interpretable models in RL.

Applications:
- Case studies demonstrating the application of interpretable RL in real-world scenarios.
- Interpretable RL in healthcare, robotics, finance, and other critical domains.
- Comparative studies showing the impact of interpretability on user trust and system usability.

Human-in-the-Loop Systems:
- Techniques for incorporating human feedback into RL systems to improve interpretability.
- Studies on the effectiveness of human-in-the-loop approaches for developing interpretable RL systems.

Evaluation and Validation:
- Benchmarks and datasets for evaluating interpretability in RL.
- User studies assessing the interpretability of RL models and their decisions.
- Validation frameworks and experimental protocols for interpretable RL.

----------------------

Manuscript submission information:

Important Dates:
- Submission deadline: December 31, 2025
- Final decision: June 01, 2026

Paper submissions for the special issue should follow the submission format and guidelines for regular papers and be submitted via Editorial Manager®. All papers will be peer-reviewed following Applied Soft Computing's reviewing procedures. The guest editors will make an initial assessment of the suitability and scope of all submissions. Papers will be evaluated based on their originality, presentation, relevance, and contributions, as well as their suitability to the special issue. Each submission must contribute to soft computing-related methodology. Papers that lack originality or clarity in presentation, or that fall outside the scope of the special issue, will be desk-rejected and will not be sent for review. Authors should select "VSI:ASOC_Interpretable Reinforcement Learning" when they reach the "Article Type" step in the submission process. Submitted papers must present original research that has not been published and is not currently under review in other venues.

----------------------

Guest editors:

Dr. Leonardo Lucio Custode
Independent Researcher
Research interests: Interpretable and Explainable Artificial Intelligence, Reinforcement Learning, Machine Learning, Large Language Models, and Optimization.
Email: leonardo.custode@gmail.com

Prof. Giovanni Iacca
University of Trento, Trento, Italy
Research interests: Computational Intelligence, Distributed Systems, Explainable AI, and Analysis of Biomedical Data.
Email: giovanni.iacca@unitn.it

Prof. Eric Medvet
University of Trieste, Trieste, Italy
Research interests: Evolutionary Computation (with a focus on genetic programming and grammar-guided genetic programming), Artificial Life, and the application of machine learning techniques to engineering and computer security problems, including robotics.
Email: emedvet@units.it

Dr. Giorgia Nadizar
University of Trieste, Trieste, Italy
Research interests: Explainable AI, Evolutionary Machine Learning, Interpretable Control, and Evolutionary Robotics.
Email: giorgia.nadizar@dia.units.it

Dr. Erica Salvato
University of Trieste, Trieste, Italy
Research interests: Control Systems, Artificial Intelligence, Reinforcement Learning, and Robotics.
Email: erica.salvato@dia.units.it