
EMERGE 2024: Ethics of AI Alignment, International Conference and Forum (Call for Proposals)


Link: https://emerge.ifdt.bg.ac.rs/
 
When Dec 12, 2024 - Dec 13, 2024
Where Belgrade
Submission Deadline Oct 1, 2024
Notification Due Oct 21, 2024
Categories: AI, ethics, social sciences, humanities
 

Call for Proposals

EMERGE 2024: ETHICS OF AI ALIGNMENT
International Conference and Forum

The Institute for Philosophy and Social Theory of the University of Belgrade and the Institute for Artificial Intelligence Research and Development of Serbia are pleased to announce the International Conference and Forum EMERGE 2024: Ethics of AI Alignment, to be held on December 12th and 13th, 2024. EMERGE is an annual event that brings together scholars, researchers, practitioners, and policymakers from around the world to discuss and debate the ethical, social, environmental, and cultural implications of emerging technologies; this year's focus is aligning artificial intelligence (AI) with human values and interests.

The goal of EMERGE 2024 is to foster enriching discussions and generate insights into how burgeoning AI technologies intersect with, influence, and are incorporated into various spheres of life. In particular, we aim to highlight potential ethical implications and chart directions for navigating the rapidly evolving digital landscape.

Advancements in AI have ushered in a new era of technological innovation, promising to revolutionize industries, enhance productivity, and improve the quality of life. However, as AI systems become increasingly integrated into various aspects of society, questions about their ethical implications have come to the forefront of public discourse. Central to these discussions is the concept of AI alignment – ensuring that AI systems are designed and deployed in ways that align with human values, goals, and societal norms.

The International Conference on the Ethics of AI Alignment seeks to explore the multifaceted ethical challenges and opportunities arising from the quest for alignment. By bringing together scholars, researchers, practitioners, and policymakers from diverse disciplines and backgrounds, the conference aims to foster critical dialogue, interdisciplinary collaboration, and insights into the ethical dimensions of AI alignment.

Through a series of subtopics, participants will delve into specific ethical dilemmas, share innovative research findings, and propose solutions to address the complex ethical issues at the intersection of AI and society. As we navigate the ethical landscape of AI alignment, we must engage in thoughtful reflection, ethical deliberation, and responsible stewardship to ensure that AI technologies serve the common good and uphold fundamental principles of justice, fairness, and human dignity.

Whether you are a professional interested in the latest advancements in AI, a student exploring career paths, or simply an AI enthusiast looking to understand the broader societal implications of the industry, EMERGE 2024 offers a comprehensive look into the ethics shaping the future of artificial intelligence.

Join us at EMERGE 2024; let us shape a better, more ethical future hand in hand with AI!

Keynote speakers:

TBA

Plenary speakers:
Jörg Matthes, University of Vienna, Austria
Joanna Zylinska, King’s College London, UK
Stefan Lorenz Sorgner, John Cabot University, Italy
Yashar Deldjoo, Polytechnic University of Bari, Italy
Henrik Carlsen, Stockholm Environment Institute, Sweden
Marko Grobelnik, Jozef Stefan Institute, Slovenia
Bruno Daniel Ferreira da Costa, Universidade da Beira Interior, Portugal
Achim Rettinger, Trier University, Germany
Mustafa Ali, Faculty of Science, Technology, Engineering & Mathematics, the Open University, UK

...more TBA

Conference Topics

To address ethical aspects of AI alignment from a multidisciplinary perspective, we invite submissions that respond to the following subtopics and resonate with the main topic:

Democracy

Global democracy is in decline, a process in no small part exacerbated by the spread of online misinformation. As AI-powered technologies continue to rapidly evolve, their intersection with democracy emerges as a critical area for exploration and ethical scrutiny. On the one hand, there are numerous examples of how AI and related technologies can be leveraged to revitalize democracy. Social media algorithms, for instance, could promote public discussion that favors facts and reasoned debate over exploiting emotions and fueling polarization. The potential of blockchain technology in countering mis- and disinformation has recently been widely discussed and explored. Generative AI could serve as a tool for moderating discussions, scaling up deliberative dialogues, and fostering consensus.

However, the promises of AI are frequently overshadowed by growing concerns about their potential to further deteriorate democracy. The proliferation of deepfakes, computational propaganda, and automated astroturfing highlights how AI can magnify the impact of online misinformation on political knowledge and preferences. Microtargeting and voter profiling remain prime concerns for voter manipulation in the face of a critical election year across the world.

This session aims to explore how AI-powered technologies can be integrated into democratic frameworks ethically and effectively to promote inclusivity, fairness, and the collective good, thus aligning the digital sphere with our shared democratic ideals. We are especially interested in contributions that examine how AI can influence electoral processes, public opinion formation, and the broader civic engagement landscape. Contributions may cover topics such as algorithmic transparency, AI-driven misinformation, the role of AI in enhancing or undermining democratic participation, and strategies for aligning AI development with democratic values and human rights.

Art & AI

AI is reshaping the art world at an unprecedented pace, raising numerous ethical concerns around AI-generated art. These concerns range from issues of authorship and intellectual property to broader societal impacts and instances of cultural appropriation.

AI systems often draw on vast datasets of existing works without the original artists' knowledge, compensation, or credit. Unlike previous technologies, AI is not merely an artistic tool but is actively involved in and credited for creating art itself, sparking many controversies. Attribution thus becomes a complex issue: should credit go to the human programmer, the AI system, or the artists whose works were used as training data?

As with the integration of every modern technology into artistic practices, the role of the artist is being reevaluated and redefined in relation to AI and its impact on the art profession. Additionally, fears about the potential misuse of AI-generated art for deceptive purposes, such as deepfake manipulation or propaganda, are justified. The rise of AI-generated art also raises socio-economic concerns, as it could lead to market saturation, devaluing art and undermining the creative process. In terms of public perception and recognition of AI art, artists play a critical role not only in navigating between human intentionality and the unpredictable outcomes of AI algorithms but also in interpreting the meaning of AI-generated art and understanding its social and cultural significance.

As technology advances and artists push boundaries in finding new uses and insights into potential futures of AI, we can anticipate the emergence of new AI-powered art forms in both technological and critical senses. Addressing present and potential AI-related issues within ethical frameworks is crucial for maintaining fairness, integrity, and accountability in the rapidly evolving world of AI-generated art.

We welcome contributions to these discussion areas:

Ethical Considerations: Authorship and intellectual property rights in AI-generated art, reactions and concerns surrounding the use of existing artworks in AI training datasets, potential misuse of AI-generated art for deceptive or malicious purposes (e.g., deepfake manipulation, propaganda), ensuring accountability in AI-generated art creation, balancing innovation with ethical considerations in the development and use of AI art generators.

Reevaluating the Artist's Role: The evolving role of artists in incorporating AI tools into their practice, balancing human intentionality with the unpredictability of AI-generated outcomes, reshaping traditional notions of authorship and creativity, and the role of artists in contextualizing and interpreting AI-generated art.

Future of Art: Anticipating groundbreaking artistic forms enabled by AI technology, exploring the potential for AI-generated art to expand creative possibilities.

Impact on the Art Market: Socio-economic implications for artists' employment in the era of AI-generated art, accessibility, and democratization of art through digital and AI technologies, and challenges to the traditional art market and valuation of artworks.

Education and Awareness: Incorporating AI literacy and ethical considerations into art education, raising awareness among artists, educators, and the public about the implications of AI on art, promoting dialogue and critical reflection on the ethical, social, and cultural dimensions of AI-generated art.


Bridging Perspectives: AI Ethics, Environmental Technology, and More-Than-Human Ecologies

In ecology and environmental engineering, AI has emerged as a powerful tool that seems well aligned with the needs and goals of our societies. AI tools have been instrumental in prompt decision-making and monitoring processes such as flood forecasting systems and predictions related to water, air, or soil quality. Computer vision techniques enable the use of satellite images for analysis, which, in addition to flood forecasting systems, is especially important for monitoring dangerous or sick animals in inaccessible areas. AI also enables the analysis of consumption patterns, providing recommendations for energy savings. Real-time monitoring is one of the main advantages of using AI; therefore, the implementation of sensor-equipped measurement stations is of paramount importance in providing datasets for AI modeling. AI serves to provide timely, accurate, and sufficient monitoring data or as an additional tool in decision-making processes to mitigate and prevent natural disasters. We are interested in research conducted from this affirmative perspective that explores technical challenges of aligning AI technologies with human needs and goals.

Yet, we also wish to address ethical, political, and social problems and complexities that receive too little consideration in the development of AI systems. We are interested in projects that closely address such challenges and approach AI engineering with a critical, scrutinizing eye. For example, in determining responsibility for inaccurate AI predictions, we can ask: does the fault lie with the human overseeing the AI or with the machine itself? How should we approach the problem during emergencies when crucial evacuation information remains accessible solely to humans? Could the solution to more than one ethical challenge lie in employing AI as an assistant rather than allowing it to dominate the decision-making process? Can we make the machine intelligent, but not responsible or reasonable? How can we ensure that machines do not exceed the boundaries of human rights and ethics? Can we protect humans from themselves?

Social sciences and humanities often consider the development and use of AI technology as a challenge rather than a solution to environmental problems. The growth of digital technologies is strongly reliant on and interconnected with economic growth and is the privilege of technologically advanced countries. Environmental humanities present a critical approach within the broader field of social sciences and humanities, raising many questions regarding the interrelations between the environment and economic and technological advancement. How does AI impact the "more-than-human" and "other-than-human"? How does AI change human perceptions of non-human entities? Are these entities just resources to be managed or used, or can one think differently about them in their relation to AI?

Another important perspective comes from critical energy studies and critical infrastructure studies. These areas of inquiry are interrelated because the issue of transforming the other-than-human into an energy resource and material to be consumed by AI is of key importance for environmental protection. How and where are rare resources extracted, and who benefits from the extraction? We need to critically engage with political, social, and technical systems that, through various infrastructures, enable the kind of transformations that lead to environmental degradation, devastation, and species extinction.

Furthermore, what powers AI and how is AI powered? What role does the fossil economy play in enabling AI? What is the promise of green AI? We can also ask what remains after AI. During the production of AI infrastructures, as well as of the energy necessary for the functioning of AI, distinct types of waste and discards are created. What happens when we think about the use of AI from the point of view of the waste it produces? Who and what is affected, and how? Digital and e-waste receive too little attention in scientific research. Are digital degrowth and other alternatives good enough to provide us with social models for using AI in an environmentally and socially responsible way? These questions are important not only for individual non-human species but, considering the Anthropocene, for the planet itself. Finally, paraphrasing Albert Einstein, we ask: can we solve the problems we have created with the same thinking that created them?



Health Tech & Health Literacy in the Context of Generative AI: Navigating Ethical Considerations in an AI-Driven Healthcare Landscape

This portion of the EMERGE 2024 conference seeks to explore questions of Health Tech and Health Literacy within the broader purview of Generative AI. This two-themed conversation is designed to dissect the impact of AI in healthcare and its potential to revolutionize health literacy, which plays a significant role in achieving better health outcomes.

AI and machine learning have the potential to reform healthcare by aiding in accurate and rapid disease diagnosis, developing personalized treatments, creating effective drug development strategies, and enhancing patient care. Yet, this promising horizon teems with critical ethical questions ranging from data privacy and algorithmic bias to the transparency and autonomy of AI decisions in healthcare.

Generative AI opens a vibrant avenue for elevating health literacy. By generating text that closely mimics human writing, Generative AI could facilitate much-needed comprehension of health information, fostering a population of informed healthcare consumers. This shift towards personalized health information can revolutionize patient autonomy and decision-making.

Key discussion areas will encompass:
Data Protection & Privacy: Balancing the benefits of Generative AI in personalizing health information while safeguarding sensitive health data.

Bias & Fairness: Examining the potential biases in AI algorithms and their impact on equal healthcare services.

Transparency & Explainability: Enhancing the interpretability of AI diagnoses for healthcare professionals and patients.

Autonomy & Responsibility: The ethical balance between AI-driven health technologies and the human element in healthcare.

Health Misinformation: The ethical implications of AI's role in both creating and combatting health-related misinformation.

Accessibility of Complex Health Information: The ethical aspects of AI's potential in rendering complex health information into understandable content for the public.


Education

AI alignment in education presents many pressing ethical concerns. How are AI systems currently used in education, and what are the opportunities and challenges associated with their use? How should we study them, use them, and let them shape our learning experiences? How should we make these changes ethically? The question of the regulation and ethics of AI is intensely debated today, yet we still lack guidelines.

In digital education, a strong emphasis is placed on developing digital competencies that encompass technical and cognitive skills as well as ethical principles for digital technology. What or who will be the controlling instance of ethics of AI alignment in educational settings? What ethical practices should education practitioners and participants adopt in the context of AI use? What ethical principles should guide those practices? Should we incorporate those principles into AI training, and how? Should we introduce AI alignment into school curriculums?

How should education data be treated in the context of AI implementation? What is the ethical way of dealing with education data ownership and security, considering the massive use of AI? What ethical and legal consequences can massive AI adoption lead to? How can we regulate personalized learning algorithms, and what are the implications of AI technologies on educational equity and access, i.e., the digital divide?

The digital society's increased information availability and the emergence of LLMs, such as ChatGPT, shape education practices, altering traditional roles of learners and teachers, and influencing education goals, methods, and standards. What new roles may arise?

AI-driven learning platforms, the use of AI-generated systems in education, and different theories about cognition that arise with these changes provide some guidelines for the alignment of AI in education. Further guidelines could emerge from insights into the development of AI, such as AI-driven assessment tools, machine learning, and information processing. How do we integrate these guidelines? What more is there to study, question, and consider, and what ethical principles and practices should we introduce, adopt, and follow?


Paradigms of AI

We look forward to discussing how different paradigms of AI operate, ranging from the qualitative and theoretical, such as symbolic AI, to the quantitative and empirical, such as machine learning. How are these various paradigms used in sync, or individually, to unlock the explainability and interpretability of the technology itself and to support the ethical use of AI, fairness, safety, and algorithmic accountability?

The concept of AI alignment strives to ensure that the goals and behaviors of artificial intelligence systems are aligned with human values and preferences. While different AI paradigms may have distinct approaches to learning, reasoning, and decision-making, the overarching goal of AI alignment remains consistent across them. Each paradigm may offer unique insights and challenges, and combinations of paradigms can provide a more comprehensive perspective on alignment issues. While all paradigms are of interest, let us mention the following:

Symbolic AI based on formal methods: In symbolic AI, knowledge is often represented explicitly using symbols and logical rules, which can make it more transparent and interpretable compared to other paradigms. This transparency can facilitate the alignment process by allowing humans to understand and verify the reasoning of AI systems. However, ensuring that the rules and goals encoded in symbolic AI systems align with human values can still be challenging, particularly as systems become more complex. Safety, fairness, privacy, and algorithmic accountability can often be guaranteed by design.
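The transparency mentioned above can be made concrete with a small illustration (a hypothetical sketch, not any system referenced in this call): when knowledge is encoded as explicit, named rules, the system can report which rule produced each decision, so a human can audit the reasoning directly.

```python
# Illustrative sketch of symbolic, rule-based decision-making.
# Every rule is explicit and named, so each decision is traceable.
# The rule names and facts here are invented for illustration only.

RULES = [
    # (rule name, condition over facts, conclusion)
    ("flag_if_untested", lambda f: not f.get("tested", False), "flag_for_review"),
    ("approve_if_audited", lambda f: f.get("tested") and f.get("fairness_audited"), "approve"),
]

def decide(facts):
    """Apply rules in order; return (conclusion, firing rule) for auditability."""
    for name, condition, conclusion in RULES:
        if condition(facts):
            return conclusion, name  # the firing rule is part of the answer
    return "no_decision", None

decision, reason = decide({"tested": True, "fairness_audited": True})
# decision == "approve"; reason names the rule that fired
```

Because the rule base is inspectable, verifying that the encoded goals match human values reduces to reviewing a finite list of rules, which is exactly the by-design accountability the paragraph above describes.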

Machine learning based on artificial neural networks and data: Machine learning approaches, including deep learning, have demonstrated remarkable capabilities in pattern recognition and decision making but are often criticized for their lack of interpretability and explainability. They are sometimes referred to as black-box solutions, implying that the inner workings of the model are not easily interpretable or understandable by humans. Ensuring that the learned models and behaviors of machine learning systems align with human values requires methods for interpretability, fairness, transparency, and safety.
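One common family of post-hoc interpretability methods probes an opaque model from the outside by perturbing its inputs. The following is a minimal, hypothetical sketch: the "model" is a stand-in function, and the finite-difference probe is one simple instance of perturbation-based sensitivity analysis, not a complete interpretability method.

```python
# Illustrative sketch: probing a black-box model with input perturbations
# to estimate which features drive a prediction. The "model" below is an
# invented stand-in, not a trained network.

def black_box(x):
    # Pretend the internals of this function are not visible to us.
    return 3.0 * x[0] - 0.5 * x[1] + 0.0 * x[2]

def sensitivity(model, x, eps=1e-4):
    """Estimate each feature's local influence via finite differences."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps  # perturb one feature at a time
        scores.append(abs(model(bumped) - base) / eps)
    return scores

print(sensitivity(black_box, [1.0, 1.0, 1.0]))  # feature 0 dominates
```

Such probes only describe local behavior around one input; richer techniques (surrogate models, attribution methods) extend the same idea, which is why interpretability remains an active alignment concern for this paradigm.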

Neuro-Symbolic AI: By combining symbolic reasoning, a transparent-box approach, with neural networks, neuro-symbolic AI makes systems more understandable and trustworthy, blending human-defined rules with learning capabilities. This makes such systems better at following ethical guidelines and working with humans, helping ensure they are aligned with our values.

Graph Neural Networks (GNN) and Graph Attention Networks (GAT): GNNs and GATs are designed to handle graph-structured data and capture relational information, which can be useful for tasks involving complex systems or networks. Ensuring alignment with human values may involve considerations such as fairness in graph-based recommendation systems, ethical implications of network analysis, and preserving privacy in social network data.

Spiking Neural Networks: SNNs introduce a more biologically inspired approach to AI, which may offer benefits in terms of energy efficiency, robustness, and adaptability. However, ensuring that spiking neural networks align with human values would involve understanding the emergent behavior of these networks.

New and emerging foundational works are welcome: We are seeking to establish the relationship of neural networks with classical algorithms – crafting neural networks capable of exhibiting algorithmic behavior, while obtaining properties otherwise absent in standard ML approaches such as generalization; designing alternatives to gradient descent, in the core computational model sense, etc.

Other paradigms are also welcome: Swarm intelligence, Bayesian networks, evolutionary computation, and other methods.

Different AI paradigms may offer unique challenges and opportunities for aligning with human values, preferences, and goals. Understanding the strengths and limitations of each paradigm enables the safer and more beneficial deployment of AI technology.


Fairness in Recommendation and Ranking Algorithms

With the availability of big data for automated processing, the impact of recommendation and ranking algorithms on society is increasing. To achieve fairness (at least on a statistical level) in ranking results, it is necessary that members of marginalized groups, historically discriminated against based on sensitive characteristics, are appropriately affirmed in the future.

Using AI-powered techniques in automated ranking mechanisms requires the implementation of adequate ethical guidelines to correct the bias in AI-based systems widely deployed in digital channels. In other words, the focus is on fairness metrics and on correcting existing biases in the data or in the decision-making algorithms designed for recommendation and ranking purposes. Accordingly, this call addresses the issues of preventing and correcting social bias in contemporary ranking algorithms.
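As a minimal illustration of what a statistical fairness metric for rankings can look like (a hypothetical sketch, not a metric prescribed by this call), one can compare each group's share of the top-k positions with its share of the overall candidate pool, in the spirit of statistical parity:

```python
# Illustrative sketch: a statistical-parity-style check for a ranking.
# All item ids and group labels below are invented for illustration.

def topk_exposure_gap(items, groups, k):
    """items: ranked ids (best first); groups: id -> group label.
    Returns, per group, (top-k share) - (pool share); negative values
    indicate under-representation in the top-k results."""
    topk = items[:k]
    gaps = {}
    for g in set(groups.values()):
        pool_share = sum(1 for i in items if groups[i] == g) / len(items)
        topk_share = sum(1 for i in topk if groups[i] == g) / k
        gaps[g] = topk_share - pool_share
    return gaps

ranking = ["a", "b", "c", "d", "e", "f"]
group_of = {"a": "A", "b": "A", "c": "A", "d": "B", "e": "B", "f": "B"}
print(topk_exposure_gap(ranking, group_of, k=2))  # group B under-represented
```

Metrics of this kind are the starting point for the pre-processing, in-processing, and post-processing correction techniques invited below, which aim to drive such gaps toward zero without unduly degrading ranking quality.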

Social initiatives for protecting vulnerable groups (seeking gender equality or inequality reduction) are another motive for addressing this topic. In addition, the latest regulations advocate for the rights of users to receive adequate explanations about decisions resulting from algorithmic decision-making that impacts them. A fair digital society calls for adequate technical support, which implies addressing the following topics:

Sources of Bias in Recommendation and Ranking Algorithms: This topic should address all stakeholders' perspectives (such as developers, users, and providers) and explain their mutual influences.

Social Antidiscrimination Frameworks and Technical Fairness Metrics: This topic should connect and adequately address legal and social initiatives with machine learning metrics.

Machine Learning Techniques for Preventing and Reducing Algorithm Bias: This topic should offer novel pre-processing, in-processing, or post-processing techniques and insight into the state-of-the-art literature.


Religion & AI

The emergence of AI technologies presents both opportunities and challenges for the evolving role of religion in the public sphere. As AI systems become increasingly integrated into society, questions arise about how they may impact religious practices, beliefs, and the expression of faith across diverse communities. AI has the potential to facilitate religious education, outreach, and even provide spiritual guidance. However, concerns also exist regarding the ethical implications of AI alignment with religious values, as well as its potential to influence religious discourse and community dynamics in ways that may require careful consideration and navigation.

We invite scholars, researchers, ethicists, technologists, religious leaders, and practitioners to participate in conversations focused on the ethical implications of AI alignment in the context of religion. We aim to explore the complex intersection of technology, faith, and morality and address the profound ethical challenges that arise when integrating artificial intelligence systems into religious contexts.

Topics of interest include, but are not limited to:

Alignment with Religious Values: How can AI systems be aligned with the diverse ethical principles and teachings of different religious traditions?

Interpretation and Adaptation: What ethical considerations are involved in interpreting religious texts and teachings for AI systems, and how can these technologies be adapted to diverse cultural and historical contexts?

Autonomy and Agency: What role should AI systems play in decision-making processes within religious communities, and how do these technologies interact with concepts of human autonomy and free will?

Ethical Governance: How can we ensure that AI technologies developed for religious purposes are governed ethically and transparently, with input from religious leaders and communities?

Cultural Sensitivity and Appropriateness: What strategies can be employed to ensure that AI systems designed for religious contexts are culturally sensitive and respectful of religious norms?

Impact on Religious Authority and Community: What are the implications of AI technologies for religious authority structures and community dynamics, and how can these technologies be integrated responsibly into religious practice?

Ethical Dilemmas and Unintended Consequences: What ethical dilemmas and unintended consequences may arise from the use of AI systems in religious contexts, and how can these challenges be addressed?

Submission Guidelines:

Authors are requested to use the abstract submission template available on the conference website. The deadline for submission is October 1st, and submissions must include:

Paper title
Abstract (500–600 words)
3–5 keywords
Name, current position, affiliation, email address, and short biography (no more than 200 words) of all authors
Preference for online or physical attendance

The conference committee will select presenters based on the relevance of submitted abstracts to the selected themes. Please specify the subtopic you are submitting to. Presentations should not exceed 15 minutes. All abstracts will be published in a book of abstracts, and selected full papers will be considered for publication in an edited volume. There are no participation fees for this conference. Participants are required to cover their own travel and accommodation expenses.

Important Dates:
Submission Deadline: October 1st
Notification of Acceptance: October 21st
Conference Dates: December 12–13

For additional details, please visit our website at https://emerge.ifdt.bg.ac.rs or contact us at emerge@ifdt.bg.ac.rs.

We look forward to your contributions and to engaging in fruitful discussions on the ethics of AI alignment.



Scientific Committee
Ljubiša Bojić, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu / Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Chair) (Serbia)
Dubravko Ćulibrk, Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
Bruno Daniel Ferreira da Costa, Universidade da Beira Interior (Portugal)
Mikhail Bukhtoyarov, University of Siberia (Russia)
Yashar Deldjoo, Politecnico di Bari (Italy)
Dejan Grba, Institute of Creativity and Innovation, University for the Creative Arts London / Xiamen University (UK/China)
Bojana Romić, Malmö universitet (Sweden)
Susanna Gordleeva, Nizhny Novgorod State University / Baltic Federal University (Russia)
Jordi Vallverdú, Universitat Autònoma de Barcelona (Spain)
Stefan Lorenz Sorgner, John Cabot University (Italy)
Corina Paraschiv, Université Paris Cité (France)
Jörg Matthes, Universität Wien (Austria)
Mustafa Ali, Faculty of Science, Technology, Engineering & Mathematics, the Open University (UK)
Ivana Krtolica, Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
Zorica Dodevska, Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
Dragiša Žunić, Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
Branislav Kisačanin, Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
Max Talanov, Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
Čedomir Markov, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Jelena Guga, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Simona Žikić, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu / Fakultet za medije i komunikacije, Univerzitet Singidunum (Serbia)
Vera Mevorah, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Vladimir Cvetković, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Ana Lipij, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Jelena Novaković, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Mirjana Nećak, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)

Organizing Committee
Simona Žikić, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu / Fakultet za medije i komunikacije (Chair) (Serbia)
Ljubiša Bojić, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu / Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
Jelena Guga, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Čedomir Markov, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Vera Mevorah, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Ana Lipij, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Jelena Novaković, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Mirjana Nećak, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Milan Radić, Institut za filozofiju i društvenu teoriju, Univerzitet u Beogradu (Serbia)
Knud Ryom, Aarhus Universitet (Denmark)
Ivana Krtolica, Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
Zorica Dodevska, Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
Dragiša Žunić, Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
Max Talanov, Istraživačko-razvojni institut za veštačku inteligenciju Srbije (Serbia)
