posted by user: gregorylaynor

AI in Evidence Synthesis 2025 : AI in Evidence Synthesis (Cochrane Evidence Synthesis and Methods)


Link: https://onlinelibrary.wiley.com/page/journal/28329023/homepage/call-for-papers/si-2024-000889
 
When N/A
Where N/A
Submission Deadline Feb 28, 2025
Categories    evidence synthesis   systematic reviews   AI   artificial intelligence
 

Call For Papers

AI promises to transform the way we practise evidence synthesis (ES) (Coiera & Liu, 2022). In light of the increasing quantity of primary research, making the process of ES more efficient and reliable would allow the ES community to better respond to the many demands for evidence-based decision making across healthcare.

Possibilities are expanding with the development of generative AI, and large language models (LLMs) in particular. Recent research has explored the use of LLMs for search, screening, data extraction, critical appraisal, and summarisation (Gartlehner et al., 2024; Hasan et al., 2024; Li et al., 2024; Ovelman et al., 2024; Wang et al., 2023; Zhang et al., 2024).

However, the application of AI to automate steps in the ES process is not new. Moreover, the almost two-decade-long history of machine learning (ML) in ES is in part a history of unfulfilled promises. For example, nearly two decades after ML-prioritised screening – the most common of all applications of AI for ES – was first suggested as a way to save screening labour (Cohen et al., 2006), the practice is still not recommended in the Cochrane Handbook, owing to a lack of reliable processes for managing the risk of missing studies (Lefebvre et al., 2023).

This is partly explained by research that has focused on demonstrating large potential efficiency gains through retrospective evaluation, rather than on implementation, and on developing processes that enable the responsible use of AI in active reviews, where validation data is unavailable, or is scarce and produced during the review itself (O'Connor et al., 2019). In other words, we lack research on how to design human-in-the-loop processes that evaluate as we go when applying AI to new ES projects, and on how to better quantify the risks of using AI so that they can be weighed appropriately against its benefits.
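To make the "evaluate as we go" concern concrete, consider ML-prioritised screening with a naive stopping rule. The sketch below is purely illustrative, with hypothetical data and a simplistic heuristic (stop after a fixed run of consecutive irrelevant records), not a validated method: a human screens records in model-ranked order while the process tracks how long it has been since the last relevant record was found.

```python
def screen_with_stopping_rule(ranked_labels, patience):
    """Screen records in model-ranked order; stop after `patience`
    consecutive irrelevant records (a simplistic heuristic)."""
    found, gap, screened = 0, 0, 0
    for is_relevant in ranked_labels:
        screened += 1
        if is_relevant:
            found, gap = found + 1, 0
        else:
            gap += 1
            if gap >= patience:
                break
    return found, screened

# Hypothetical ranking: the model places most relevant records near the top.
ranked = [True] * 15 + [False] * 5 + [True] * 5 + [False] * 75

found, screened = screen_with_stopping_rule(ranked, patience=50)
recall = found / sum(ranked)   # 20/20 here, but only by luck of the data
```

In this toy run the rule saves a quarter of the screening effort at full recall, but nothing in the procedure bounds the risk that a relevant record sits below the stopping point. That missing risk bound is precisely why such heuristics are hard to recommend for active reviews.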

The enthusiasm around LLMs means that available evaluations vary in robustness and quality. For example, "prompt engineering", whether explicit or implicit, is frequently practised on the same data used to evaluate the system, making the validation scores unreliable predictors of future performance where labelled data is not available. We must therefore prioritise, and encourage, better validation practices, especially where there is excitement about a new technology. We also need to synthesise results across validation studies, lest we fall victim to the same risks of making decisions on biased subsets of the evidence that ES itself was designed to mitigate.
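The leakage problem described above has a simple structural remedy. The sketch below (illustrative only, with hypothetical data and a stub in place of an LLM call) shows the held-out discipline: prompts are iterated against a development set, while a sealed held-out set is touched once, after the prompts are frozen, and only that score is reported.

```python
import random

# Hypothetical labelled screening decisions: (record_id, include?)
records = [(i, i % 3 == 0) for i in range(300)]

rng = random.Random(42)          # fixed seed for reproducibility
rng.shuffle(records)

dev_set = records[:200]          # used freely while iterating on prompts
held_out = records[200:]         # evaluated ONCE, after prompts are frozen

def evaluate(predict, data):
    """Proportion of screening decisions the system gets right."""
    return sum(predict(x) == y for x, y in data) / len(data)

# `predict` stands in for an LLM screening call; here a trivial stub.
predict = lambda record_id: record_id % 3 == 0

dev_score = evaluate(predict, dev_set)      # tune prompts against this
final_score = evaluate(predict, held_out)   # report only this
```

A score computed on `dev_set` after repeated prompt tweaks is an optimistic estimate; only the single evaluation on `held_out` approximates performance on future, unlabelled data.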

This call aims to bring together papers that bridge the gap between demonstrating AI's potential and implementing AI responsibly, whether by collecting, appraising, and synthesising evidence on the use of AI across ES tasks to guide decision-making on its use, or through empirical or theoretical research showing how AI can be used in active reviews, where pre-annotated validation datasets are not available.

This is a joint call between the Collaboration for Environmental Evidence (CEE), Campbell, and Cochrane for papers to be considered for publication in CEE's Environmental Evidence journal (https://www.biomedcentral.com/collections/AISESEM), the Campbell Systematic Reviews journal (https://onlinelibrary.wiley.com/journal/18911803), and the Cochrane Evidence Synthesis and Methods journal. Our decision to work together reflects our shared recognition that stronger collaboration between the fields of AI and ES should be built on shared interests. We want to curate a collection of papers from our respective journals that will increase the discoverability of research in this area, foster innovation across our disciplines, and serve the generation of knowledge for future policy making, all with responsible use of AI.

Topics in AI in evidence synthesis include, but are not limited to:

- Evaluation of benefits and risks of AI for evidence synthesis
- Validation methods for AI in evidence synthesis
- Exploration of new types of evidence synthesis enabled by AI
- Critical studies of social implications of AI in evidence synthesis
- Studies within a review (SWAR) involving AI implementation
- Evidence syntheses on AI in evidence synthesis
- Tutorials on utilizing AI in evidence synthesis
