
We are excited to announce the next Munich AI Lecture featuring Prof. Virginia Dignum, a member of the relAI Scientific Advisory Board. She is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, where she leads the AI Policy Lab. She is also senior advisor on AI policy to the Wallenberg Foundations and chair of the ACM’s Technology Policy Council.
Event Details:
🔹Title: Responsible AI: Governance, Ethics, and Sustainable Innovation
🔹Date and Time: July 9, 2025, 6:30 pm
🔹Location: Plenarsaal of the Bavarian Academy of Sciences and Humanities (BAdW), Alfons-Goppel-Straße 11, 80539 Munich
Abstract
As AI systems become increasingly autonomous and embedded in socio-technical environments, balancing innovation with social responsibility grows ever more urgent. Multi-agent systems and autonomous agents offer valuable insights into decision-making, coordination, and adaptability, yet their deployment raises critical ethical and governance challenges. How can we ensure that AI aligns with human values, operates transparently, and remains accountable within complex social and economic ecosystems?

This talk explores the intersection of AI ethics, governance, and agent-based perspectives, drawing on my work in AI policy and governance as well as prior research on agents, agent organizations, formal models, and decision-making frameworks. Recent advancements are reshaping AI not just as a technology but as a socio-technical process that functions in dynamic, multi-stakeholder environments. Addressing accountability, normative reasoning, and value alignment therefore requires a multidisciplinary approach.

A central focus of this talk is the role of governance structures, regulatory mechanisms, and institutional oversight in ensuring that AI remains both trustworthy and adaptable. Drawing on recent AI policy research, I will examine strategies for embedding ethical constraints in AI design, the role of explainability in agent decision-making, and how multi-agent coordination informs regulatory compliance. Rather than viewing regulation as a barrier, I will show that responsible governance is an enabler of sustainable innovation, driving public trust, business differentiation, and long-term technological progress.

By integrating insights from agent-based modeling, AI policy frameworks, and governance strategies, this talk underscores the importance of designing AI systems that are both socially responsible and technically robust. Ultimately, ensuring that AI serves the common good requires a multidisciplinary approach: one that combines formal models, ethical considerations, and adaptive policy mechanisms to create AI systems that are accountable, fair, and aligned with human values.
More information is available on the website of the Munich AI Lecture, Munich's flagship speaker series on AI, co-organized by relAI.