Björn Ommer receives the Eduard Rhein Award

🥂Congratulations!

relAI Fellow Björn Ommer has been awarded the Eduard Rhein Foundation's 2024 Technology Award. The award ceremony took place on 24 May 2025 at the Flugwerft Schleißheim.

The Eduard Rhein Foundation has honoured Björn Ommer and his team, a group of pioneers in the field of artificial intelligence. Through open and efficient model architectures, the group has democratized access to generative AI. Their approach demonstrates the potential of generative AI not only for images, but also for other modalities such as audio and text, thus laying the foundation for a wide range of applications – from media production, where realistic or creative content is created for presentations, to prototyping in automotive design, to synthetic data to support diagnostics in medical research.

Follow this link to read the complete article.

We are excited to announce that Airbus, a key player in the global aerospace sector, and QuantCo, a leading data science and AI firm, have become industry partners of relAI.

The research areas of Airbus Central Research and Technology, the central research entity of the Airbus Group, align well with the relAI research areas Algorithmic Decision-Making and Robotics & Interacting Systems. Furthermore, trustworthy AI, along with relAI’s central themes of safety and security, is essential for establishing a reliable foundation for the use of AI in Airbus's future products and services.

In addition to participating in events and activities and supporting relAI students, Airbus brings fresh perspectives and potential application areas for AI. The aerospace sector demands a high level of safety and reliability, and this collaboration will help stimulate new research directions.

QuantCo, founded in 2016 by four Harvard and Stanford PhDs, has rapidly grown into a team of 200 data scientists, software engineers, and deep learning experts. With offices across the US, UK, Switzerland, and Germany (including one in Munich), QuantCo collaborates with market-leading companies in insurance, e-commerce, and automotive. The data science and AI company develops algorithms for pricing and claims management, among other applications, and works on topics within the relAI research area Medicine & Healthcare. QuantCo will support our students by offering valuable internship opportunities.

 

relAI warmly welcomes TUM Professor Alexander König to our school. Prof. König is the Interim Head of the Chair of Robotics and System Intelligence and the Scientific Lead of Project Geriatronics at the Technical University of Munich.

His work sits at the intersection of Medicine & Healthcare and Robotics & Interacting Systems, directly addressing the central themes of our program: Safety, Security, Privacy, and Responsibility. He follows two main research directions: (i) translating AI-controlled robotic technology from the laboratory to the patient, to investigate the real-world effects of AI and robotics on healthcare and caregiving; and (ii) using AI and robotics to understand how aging affects cognitive abilities and motor control in the elderly, with the aim of optimizing quality of life through technology.

He will contribute to relAI through lectures and workshops on medical technology and robotics, teaching young scientists the path toward real-world application of their ideas with patients. Additionally, his insights on conducting user studies, navigating CE certification processes, and developing commercialization strategies through startups will further enrich research at relAI.

We are excited to announce the next Munich AI Lecture featuring Prof. Virginia Dignum, a member of the relAI Scientific Advisory Board. She is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, where she leads the AI Policy Lab. She is also senior advisor on AI policy to the Wallenberg Foundations and chair of the ACM’s Technology Policy Council.

 Event Details:

🔹Title: Responsible AI: Governance, Ethics, and Sustainable Innovation

🔹Date and Time: July 9, 2025 6:30 pm

🔹Location: Plenarsaal of the Bavarian Academy of Sciences and Humanities (BAdW), Alfons-Goppel-Straße 11, 80539 Munich

Abstract

As AI systems become increasingly autonomous and embedded in socio-technical environments, balancing innovation with social responsibility grows increasingly urgent. Multi-agent systems and autonomous agents offer valuable insights into decision-making, coordination, and adaptability, yet their deployment raises critical ethical and governance challenges. How can we ensure that AI aligns with human values, operates transparently, and remains accountable within complex social and economic ecosystems? This talk explores the intersection of AI ethics, governance, and agent-based perspectives, drawing on my work in AI policy and governance, as well as prior research on agents, agent organizations, formal models, and decision-making frameworks. Recent advancements are reshaping AI not just as a technology but as a socio-technical process that functions in dynamic, multi-stakeholder environments. As such, addressing accountability, normative reasoning, and value alignment requires a multidisciplinary approach. A central focus of this talk is the role of governance structures, regulatory mechanisms, and institutional oversight in ensuring AI remains both trustworthy and adaptable. Drawing on recent AI policy research, I will examine strategies for embedding ethical constraints in AI design, the role of explainability in agent decision-making, and how multi-agent coordination informs regulatory compliance. Rather than viewing regulation as a barrier, I will show that responsible governance is an enabler of sustainable innovation, driving public trust, business differentiation, and long-term technological progress. By integrating insights from agent-based modeling, AI policy frameworks, and governance strategies, this talk underscores the importance of designing AI systems that are both socially responsible and technically robust. Ultimately, ensuring AI serves the common good requires a multidisciplinary approach—one that combines formal models, ethical considerations, and adaptive policy mechanisms to create AI systems that are accountable, fair, and aligned with human values.

More information is available on the website of the Munich AI Lecture, the flagship speaker series about AI in Munich, co-organized by relAI.

We are excited to announce that the call for applications to the MSc program 2025 of our Konrad Zuse School of Excellence in Reliable AI (relAI) is now open!

The novel relAI MSc program is an addition to the MSc program at TUM or LMU, offering cross-sectional training for a successful education in AI. It provides coherent yet flexible and personalized training that combines scientific coursework, professional development courses, and industrial exposure.

Funded applicants will receive a scholarship of up to 992 EUR (depending on independent income). They are further supported by travel grants, e.g., for home travel.  

We highly encourage you to apply if you have: 

  • an excellent bachelor’s degree in computer science, mathematics, engineering, natural sciences or other data science/machine learning/AI related disciplines;
  • a genuine interest in working on a topic of reliable AI covering aspects such as safety, security, privacy and responsibility in one of relAI’s research areas: Mathematical & Algorithmic Foundations, Algorithmic Decision-Making, Medicine & Healthcare or Robotics & Interacting Systems;
  • certified proficiency in English.

📆 Application Deadline: 17 June 2025 (23:59 AoE)

🔗 Apply now: https://zuseschoolrelai.de/application/#MSc-Program-Application

Please help us spread the word, especially to excellent international candidates.

The excellent work of relAI students will be prominently represented at the Thirteenth International Conference on Learning Representations (ICLR 2025), which will take place at the Singapore EXPO from 24 to 28 April 2025.

Thirteen publications from our students will be presented at the conference, nine of them in the main track. Notably, four out of these nine publications have been selected for Oral or Spotlight presentations. This is a significant achievement and demonstrates the high quality of relAI research, considering that only 15% of accepted papers are invited to give a talk.

If you plan to attend the conference, do not miss the opportunity to discuss these publications directly with some of our students. Be sure to attend the Oral presentation by Yan Scholten titled “A Probabilistic Perspective on Unlearning and Alignment for Large Language Models” on 24 April. You can learn about Lisa Wimmer’s work, “Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning”, at the Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions. Additionally, check out the posters of Amine Ketata and Chengzhi Hu: Amine will be presenting his work on “Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space”, and you can talk to Chengzhi Hu about “Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation”.

Full list of relAI publications at ICLR 2025:

Oral Presentation - Main Track

  1. A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
     Yan Scholten, Stephan Günnemann, Leo Schwinn

Spotlight Presentations - Main Track

  2. Exact Certification of (Graph) Neural Networks Against Label Poisoning
     Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
  3. Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning
     Yan Scholten, Stephan Günnemann
  4. Signature Kernel Conditional Independence Tests in Causal Discovery for Stochastic Processes
     Georg Manten, Cecilia Casolo, Emilio Ferrucci, Søren Wengel Mogensen, Cristopher Salvi, Niki Kilbertus

Posters - Main Track

  5. Differentially private learners for heterogeneous treatment effects
     Maresa Schröder, Valentyn Melnychuk, Stefan Feuerriegel
  6. Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation
     Xinpeng Wang, Chengzhi Hu, Paul Röttger, Barbara Plank
  7. ParFam -- (Neural Guided) Symbolic Regression via Continuous Global Optimization
     Philipp Scholl, Katharina Bieker, Hillary Hauger, Gitta Kutyniok
  8. Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space
     Mohamed Amine Ketata, Nicholas Gao, Johanna Sommer, Tom Wollschläger, Stephan Günnemann
  9. Constructing Confidence Intervals for Average Treatment Effects from Multiple Datasets
     Yuxin Wang, Maresa Schröder, Dennis Frauen, Jonas Schweisthal, Konstantin Hess, Stefan Feuerriegel

Workshops

  10. Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning
      Lisa Wimmer, Bernd Bischl, Ludwig Bothmann
      Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions
  11. Privacy Amplification by Structured Subsampling for Deep Differentially Private Time Series Forecasting
      Jan Schuchardt, Mina Dalirrooyfard, Jed Guzelkabaagac, Anderson Schneider, Yuriy Nevmyvaka, Stephan Günnemann
      Workshop on Advances in Financial AI: Opportunities, Innovations and Responsible AI
  12. Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback
      Niklas Ippisch, Anna-Carolina Haensch, Markus Herklotz, Jan Simson, Jacob Beck, Malte Schierholz
      Workshop on Human-AI Coevolution
  13. Exact Certification of (Graph) Neural Networks Against Label Poisoning
      Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
      VerifAI: AI Verification in the Wild
  14. Graph Neural Networks for Enhancing Ensemble Forecasts of Extreme Rainfall
      Christopher Bülte, Sohir Maskey, Philipp Scholl, Jonas Berg, Gitta Kutyniok
      Workshop on Tackling Climate Change with Machine Learning

🥂Congratulations!

We are proud to announce that our relAI Director Gitta Kutyniok has been invited to become a member of the US National Academy of Artificial Intelligence (NAAI). NAAI is committed to advancing artificial intelligence by fostering collaboration among leading experts and promoting innovative research and development.

The election acknowledges Gitta Kutyniok's distinguished contributions to applied harmonic analysis, compressed sensing, and artificial intelligence. This honor recognizes her leadership as the Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at Ludwig-Maximilians-Universität München (LMU Munich), which has significantly advanced research and collaboration in these fields. In addition, NAAI highlighted her recognition as a SIAM Fellow in 2019 and an IEEE Fellow in 2024, which underscores her outstanding accomplishments and the high regard in which she is held by her peers.

Save the date!

We proudly invite you to our next Munich AI Lecture. This is the flagship speaker series about AI in Munich, co-organized by relAI.

 Event Details:

  • Speaker: Prof. Michael Mahoney (UC Berkeley)
  • Title: Foundational Methods for Foundation Models for Scientific Machine Learning
  • Date and Time: March 26, 2025 14:00 CET
  • Location: Lecture Hall W201, Professor-Huber-Platz 2, LMU Munich, 80539 Munich (Metro U3/U6 Universität, Exit B) LMU Room Finder

Abstract

The remarkable successes of ChatGPT in natural language processing (NLP) and related developments in computer vision (CV) motivate the question of what foundation models would look like and what new advances they would enable, when built on the rich, diverse, multimodal data that are available from large-scale experimental and simulational data in scientific computing (SC), broadly defined. Such models could provide a robust and principled foundation for scientific machine learning (SciML), going well beyond simply using ML tools developed for internet and social media applications to help solve future scientific problems. Prof. Mahoney will describe recent work demonstrating the potential of the "pre-train and fine-tune" paradigm, widely-used in CV and NLP, for SciML problems, demonstrating a clear path towards building SciML foundation models; as well as recent work highlighting multiple "failure modes" that arise when trying to interface data-driven ML methodologies with domain-driven SC methodologies, demonstrating clear obstacles to traversing that path successfully. Prof. Mahoney will also describe initial work on developing novel methods to address several of these challenges, as well as their implementations at scale, a general solution to which will be needed to build robust and reliable SciML models consisting of millions or billions or trillions of parameters.

Bio of the speaker

Michael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He is also an Amazon Scholar as well as head of the Machine Learning and Analytics Group at the Lawrence Berkeley National Laboratory. He works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, scientific machine learning, scalable stochastic optimization, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, computational methods for neural network analysis, physics informed machine learning, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department. Among other things, he was on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI), he was on the National Research Council's Committee on the Analysis of Massive Data, he co-organized the Simons Institute's fall 2013 and 2018 programs on the foundations of data science, he ran the Park City Mathematics Institute's 2016 PCMI Summer Session on The Mathematics of Data, he ran the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets, and he was the Director of the NSF/TRIPODS-funded FODA (Foundations of Data Analysis) Institute at UC Berkeley. More information is available at https://www.stat.berkeley.edu/~mmahoney/ .

This event is open to everyone; registration is not required.

The Roland Berger Foundation (RBS) and TUM have begun a collaboration to promote the AI skills of socially disadvantaged children and young people. RBS works with 70 partner schools throughout Germany to provide scholarships to talented primary school pupils from socially disadvantaged families, starting in the second grade.

relAI Fellow Enkelejda Kasneci is the scientific director of the project. The scholarship holders learn how to use AI responsibly and reflectively. In addition, AI tools are being developed to better support children and young people facing difficult starting conditions.

For more information, please visit the websites of the Roland Berger Foundation and TUM.

relAI warmly welcomes LMU Professor David Rügamer to our school. David heads the Data Science Group at LMU, and he is also a Principal Investigator at the Munich Center for Machine Learning (MCML).

Prof. Rügamer works on fundamental topics within the relAI research area Mathematical & Algorithmic Foundations, such as symmetries, sparsity, and uncertainty quantification in deep neural networks. His work is also relevant to the relAI research area Algorithmic Decision-Making. relAI will benefit from his research experience and from his contributions to our curriculum, including lectures.