Munich AI Lecture with the A.M. Turing Award winner Prof. Yoshua Bengio

Don’t miss the upcoming Munich AI Lecture featuring Prof. Yoshua Bengio from Université de Montréal.

As AI capabilities accelerate, a critical question emerges: can we ensure these systems remain aligned with human values? While advances in reasoning and planning bring us closer to broadly human-level intelligence, recent findings also reveal troubling behaviors such as deception, hacking, and resistance to shutdown.

In his Munich AI Lecture, Yoshua Bengio will explore these challenges and outline a safer path forward. He argues for the design of non-agentic yet trustworthy AIs — systems modeled after a selfless scientist, dedicated to understanding the world rather than pursuing their own goals. Such “Scientist AIs” could act as monitors, helping society manage more powerful agentic systems and reduce existential risks. Beyond technical solutions, Bengio calls for political coordination at national and international levels, treating transformative AI as a global public good essential for safeguarding democracy and stability.

The lecture will be followed by a panel discussion with Andrea Martin (Chief Technology Officer, IBM), relAI Director Prof. Dr. Gitta Kutyniok, and Stephanie Jacobs (Head of Office, Bavarian State Ministry of Science and the Arts), moderated by Dr. Michael Klimke (CEO, BAIOSPHERE).

📍 Bavarian Academy of Sciences, Alfons-Goppel-Str. 11 (Residenz), 80539 Munich

📅 23 October 2025  

🕡 6:00 pm - 9:00 pm

About the Speaker

Yoshua Bengio is Full Professor at Université de Montréal, Founder and Scientific Advisor of Mila, Co-President and Scientific Director of LawZero, and Canada CIFAR AI Chair. A recipient of the 2018 A.M. Turing Award — often called the “Nobel Prize of computing” — he is the most cited computer scientist worldwide and among the most cited living scientists across all fields. Bengio is a Fellow of the Royal Society of London and Canada, an Officer of the Order of Canada, a Knight of the French Legion of Honor, and currently chairs the International AI Safety Report.

Registration

The event is already fully booked. If you would like to be added to the waiting list, please send a message to events@baiosphere.org.

The lecture will also be available via LIVESTREAM on YouTube (no registration needed).

More information:

https://baiosphere.org/en/events/2025/munich-ai-lecture-prof-yoshua-bengio

https://www.lmu.de/ai-hub/en/news-events/all-events/event/munich-ai-lecture-prof.-yoshua-bengio.html

Last weekend, relAI engaged with children during the TUM Open Doors with the Mouse 2025 event at the Munich Data Science Institute (MDSI). relAI students Manuel Hülskamp, Natascha Niessen, Lisa Schmierer, and Richard Schwank enthusiastically took part, using computer games to explain concepts of machine learning and artificial intelligence to the young attendees. The children learned how to train an AI model to differentiate between apples, pears, bananas, and plums, and even taught it to recognize their own faces, testing the feature on their siblings and other visitors.

A heartfelt thank you to the relAI students and coordinator Andrea Schafferhans for their support of the event, as well as to the families who participated.

Check the MDSI news for detailed information about the event.

Registration for the Munich Career Fair AI & Data Science 2025 is now open! You can request your ticket here.

📅 Date and Time: October 23, 2025, 2 to 5 pm

📍 Location: TranslaTUM at Klinikum rechts der Isar, Einsteinstraße 25 (Bau 522), 81675 Munich

🏢 Participating companies: Celonis, DENSO, Diehl, GE Healthcare, Google, Imfusion, Munich Re, QuantCo, SAP, Thyssenkrupp, and Zeiss

👉 Check out this link for more information and the agenda

Don’t miss the upcoming Munich AI Lecture featuring Prof. Aaron M. Johnson from Carnegie Mellon University, currently a Visiting Professor at TUM.

What happens when robots leave the lab and enter the real world? Suddenly, uncertainty is everywhere — slippery mud, bending branches, unpredictable terrain. These challenges are especially tough when it comes to contact: one moment a robot applies massive force, the next it has no grip at all.

In his talk, Aaron M. Johnson will show how robots can learn to master the unknown — from off-road driving in new environments to agile walking through vegetation. Expect cutting-edge insights into how uncertainty can be modeled, reduced, and even turned into an advantage for the future of robotics.

📍 Georg-Brauchle-Ring 58, Room M001
The TUM Room Finder will help you find the way.

More information:

The Munich Data Science Institute (MDSI), the Konrad Zuse School of Excellence in Reliable AI (relAI), the Munich Center for Machine Learning (MCML), and the AI Hub@LMU are organizing the Munich Career Fair AI & Data Science at TranslaTUM on October 23, 2025. The event is tailored to companies and students (bachelor's, master's, and doctoral candidates) who are interested in AI, ML, and data science.

We are pleased to announce that the first Munich Career Fair AI & Data Science 2025 will take place on October 23, 2025, at TranslaTUM. This year, we welcome eleven industry partners and students in bachelor's and master's programs, as well as doctoral candidates. The aim is to connect students at various stages of their education with industry representatives and to highlight career prospects in the field of AI and data science in the Munich ecosystem.

Each industry partner will present its activities in the field of AI and data science in an overview talk and introduce the associated career opportunities. In addition, there will be plenty of time and space for networking and personal exchange in the foyer of TranslaTUM and in separate meeting rooms.

 Event Details

📅 Date and Time: October 23, 2025, 2 to 5 pm

📍 Location: TranslaTUM at Klinikum rechts der Isar, Einsteinstraße 25 (Bau 522), 81675 Munich

📝 Registration: Registration is open now! Please request a ticket here.

Agenda

Time | Affiliation | Speaker | Talk title
14:00 | Organisers | Dr. Thomas Müller, Dr. Andrea Schafferhans | Welcome
14:05 | Celonis | Niclas Sabel | The Power of Agentic AI: Driving Organizational Transformation and Efficiency
14:20 | DENSO | Brian Hsuan-Cheng Liao | The Development of Reliable AI-Driven Vehicles in DENSO
14:35 | Diehl | Ariane Jesussek, Joel Eichberger, Matthew Schwind | AI at Diehl - Implementing AI in a diversified technology group
14:50 | GE Healthcare | Dr. Timo Schirmer | The Human Algorithm in Healthcare: Careers and AI in Times of Disruption
15:05 | Google | Irina Stambolska | Google AI, Data Science and Careers
15:20 | Imfusion | Dr. Raphael Prevost | Enabling Rapid Innovation in Medical Imaging with AI
15:35 | Munich Re | Karolina Stosio | Data & AI @ Munich Re
15:50 | QuantCo | Carolin Thomas | QuantCo
16:05 | SAP | Yichen Lou | Towards Intelligent Enterprise Systems - AI @ SAP
16:20 | Thyssenkrupp | Dr. Nikou Günnemann | KI@thyssenkrupp
16:35 | Zeiss | Dr. Florent Martin | AI and ML in ZEISS

We are excited to announce the next Munich AI Lecture featuring Prof. Guido Montúfar, Professor of Mathematics and of Statistics & Data Science at the University of California, Los Angeles, and leader of the Mathematical Machine Learning research group at the Max Planck Institute for Mathematics in the Sciences (MPI MiS). He serves as a core Principal Investigator in the SECAI Zuse School of Excellence in AI (Leipzig–Dresden).

 Event Details:

🎤 Title: Deep Learning Theory: What we know, what we are learning, and what remains unclear

📅 Date and Time: July 17, 2025 at 5 pm CET

📍 Location: Room B006, Main LMU Building, Geschwister-Scholl-Platz 01, 80539 Munich

Abstract

Deep learning has revolutionized artificial intelligence and a wide range of applied domains, driving transformative progress in computer vision, language processing, and scientific discovery. This talk surveys the vibrant and rapidly evolving landscape of deep learning theory—an effort to uncover the mathematical foundations of learning with neural networks. We will review key theoretical insights into optimization dynamics, implicit biases of learning algorithms, and the generalization behavior of deep models—highlighting connections to classical learning theory, high dimensional statistics, and approximation theory. Along the way, we will discuss some of the major successes in analyzing overparameterized regimes, as well as open challenges in understanding feature learning and generalization under moderate overparameterization. The talk will also spotlight emerging phenomena such as benign overfitting, grokking, and delayed generalization, illustrating the depth and complexity of ongoing research questions that challenge traditional notions.

More information is available at the website of Munich AI Lectures

We are excited to announce the next Munich AI Lecture featuring Prof. Virginia Dignum, a member of the relAI Scientific Advisory Board. She is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, where she leads the AI Policy Lab. She is also senior advisor on AI policy to the Wallenberg Foundations and chair of the ACM’s Technology Policy Council.

 Event Details:

🔹 Title: Responsible AI: Governance, Ethics, and Sustainable Innovation

🔹 Date and Time: July 9, 2025, 6:30 pm

🔹 Location: Plenarsaal of the Bavarian Academy of Sciences and Humanities (BAdW), Alfons-Goppel-Straße 11, 80539 Munich

Abstract

As AI systems become increasingly autonomous and embedded in socio-technical environments, balancing innovation with social responsibility grows increasingly urgent. Multi-agent systems and autonomous agents offer valuable insights into decision-making, coordination, and adaptability, yet their deployment raises critical ethical and governance challenges. How can we ensure that AI aligns with human values, operates transparently, and remains accountable within complex social and economic ecosystems? This talk explores the intersection of AI ethics, governance, and agent-based perspectives, drawing on my work in AI policy and governance, as well as prior research on agents, agent organizations, formal models, and decision-making frameworks. Recent advancements are reshaping AI not just as a technology but as a socio-technical process that functions in dynamic, multi-stakeholder environments. As such, addressing accountability, normative reasoning, and value alignment requires a multidisciplinary approach.

A central focus of this talk is the role of governance structures, regulatory mechanisms, and institutional oversight in ensuring AI remains both trustworthy and adaptable. Drawing on recent AI policy research, I will examine strategies for embedding ethical constraints in AI design, the role of explainability in agent decision-making, and how multi-agent coordination informs regulatory compliance. Rather than viewing regulation as a barrier, I will show that responsible governance is an enabler of sustainable innovation, driving public trust, business differentiation, and long-term technological progress.

By integrating insights from agent-based modeling, AI policy frameworks, and governance strategies, this talk underscores the importance of designing AI systems that are both socially responsible and technically robust. Ultimately, ensuring AI serves the common good requires a multidisciplinary approach—one that combines formal models, ethical considerations, and adaptive policy mechanisms to create AI systems that are accountable, fair, and aligned with human values.

More information is available on the website of the Munich AI Lectures. This is the flagship speaker series about AI in Munich, co-organized by relAI.

Save the date!

We proudly invite you to our next Munich AI Lecture. This is the flagship speaker series about AI in Munich, co-organized by relAI.

 Event Details:

  • Speaker: Prof. Michael Mahoney (UC Berkeley)
  • Title: Foundational Methods for Foundation Models for Scientific Machine Learning
  • Date and Time: March 26, 2025, 14:00 CET
  • Location: Lecture Hall W201, Professor-Huber-Platz 2, LMU Munich, 80539 Munich (Metro U3/U6 Universität, Exit B). LMU Room Finder

Abstract

The remarkable successes of ChatGPT in natural language processing (NLP) and related developments in computer vision (CV) motivate the question of what foundation models would look like and what new advances they would enable, when built on the rich, diverse, multimodal data that are available from large-scale experimental and simulational data in scientific computing (SC), broadly defined. Such models could provide a robust and principled foundation for scientific machine learning (SciML), going well beyond simply using ML tools developed for internet and social media applications to help solve future scientific problems. Prof. Mahoney will describe recent work demonstrating the potential of the "pre-train and fine-tune" paradigm, widely-used in CV and NLP, for SciML problems, demonstrating a clear path towards building SciML foundation models; as well as recent work highlighting multiple "failure modes" that arise when trying to interface data-driven ML methodologies with domain-driven SC methodologies, demonstrating clear obstacles to traversing that path successfully. Prof. Mahoney will also describe initial work on developing novel methods to address several of these challenges, as well as their implementations at scale, a general solution to which will be needed to build robust and reliable SciML models consisting of millions or billions or trillions of parameters.

Bio of the speaker

Michael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He is also an Amazon Scholar as well as head of the Machine Learning and Analytics Group at the Lawrence Berkeley National Laboratory. He works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, scientific machine learning, scalable stochastic optimization, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, computational methods for neural network analysis, physics informed machine learning, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department. Among other things, he was on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI), he was on the National Research Council's Committee on the Analysis of Massive Data, he co-organized the Simons Institute's fall 2013 and 2018 programs on the foundations of data science, he ran the Park City Mathematics Institute's 2016 PCMI Summer Session on The Mathematics of Data, he ran the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets, and he was the Director of the NSF/TRIPODS-funded FODA (Foundations of Data Analysis) Institute at UC Berkeley. More information is available at https://www.stat.berkeley.edu/~mmahoney/ .

This event is open to everyone; registration is not required.

It is our great pleasure to announce the next Munich AI Lecture featuring Prof. Dr. Jean-Luc Starck, Director of Research and head of the CosmoStat laboratory at the Institute of Research into the Fundamental Laws of the Universe, Département d'Astrophysique, CEA-Saclay, France. The lecture is organized by relAI director Prof. Dr. Gitta Kutyniok and co-hosted by Prof. Dr. Jochen Weller, with the support of BAIOSPHERE, the Bavarian AI Network.

Event Details:

  • Speaker: Prof. Dr. Jean-Luc Starck
  • Title: Unveiling the Cosmos: Deep Learning Solutions to Inverse Problems in Astrophysics
  • Date and Time: Tuesday, 18 February 2025, 17:00 to 18:30
  • Location: Senatssaal, LMU Munich, Geschwister-Scholl-Platz 1, Munich

Prof. Starck will speak about how inverse problems in astrophysics, such as image reconstruction or gravitational lensing data analysis, have traditionally relied on sparsity-based techniques to recover underlying physical structures from incomplete or noisy data. Deep learning methods are now replacing these classical approaches, offering unprecedented performance gains in accuracy and efficiency. Despite their success, deep learning methods introduce new challenges, including interpretability, generalization across diverse astrophysical scenarios, and robustness to observational biases. In this talk, the speaker will explore the transition from sparsity-driven methods to deep learning-based solutions, highlighting both the opportunities and pitfalls of this paradigm shift. Prof. Starck will discuss recent developments, applications to astrophysical data, and future directions for addressing the emerging challenges in this rapidly evolving field.

For more information about the event and the speaker, visit this weblink.

relAI is a co-organiser of the Munich AI Lectures. Find more information on this and other upcoming events on the Munich AI Lectures home page.

This month, a team of 13 talented master's and PhD students from our graduate school in reliable AI (relAI) showcased their quantitative skills and teamwork in an exciting estimation competition. The participants had 30 minutes to work on 13 estimation challenges, such as "What is the average discharge of the Isar where it meets the Danube, in m³/s?"

The spirit of competition and learning was truly inspiring. Check out the photo of our team, proudly representing relAI.