Publications of relAI students at ICLR 2025

The excellent work of relAI students will be prominently represented at the Thirteenth International Conference on Learning Representations (ICLR) 2025, which will take place at the Singapore EXPO from 24 to 28 April 2025.

Twelve publications from our students will be presented at the conference, nine of them in the main track. Notably, four of these nine publications have been selected for Oral or Spotlight presentations. This is a significant achievement and demonstrates the high quality of relAI research, given that only about 15% of accepted papers are invited to give a talk.

If you plan to attend the conference, do not miss the opportunity to discuss these publications directly with some of our students. Be sure to attend the Oral presentation by Yan Scholten, "A Probabilistic Perspective on Unlearning and Alignment for Large Language Models", on 24 April. You can learn about Lisa Wimmer's work, "Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning", at the Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions. Additionally, check out the posters of Amine Ketata and Chengzhi Hu: Amine will be presenting his work on "Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space", and you can talk to Chengzhi Hu about "Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation".

Full list of relAI publications at ICLR 2025:

Oral Presentation - Main Track

  1. A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
    Yan Scholten, Stephan Günnemann, Leo Schwinn

Spotlight Presentations - Main Track

  2. Exact Certification of (Graph) Neural Networks Against Label Poisoning
    Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
  3. Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning
    Yan Scholten, Stephan Günnemann
  4. Signature Kernel Conditional Independence Tests in Causal Discovery for Stochastic Processes
    Georg Manten, Cecilia Casolo, Emilio Ferrucci, Søren Wengel Mogensen, Cristopher Salvi, Niki Kilbertus

Posters - Main Track

  5. Differentially Private Learners for Heterogeneous Treatment Effects
    Maresa Schröder, Valentyn Melnychuk, Stefan Feuerriegel
  6. Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation
    Xinpeng Wang, Chengzhi Hu, Paul Röttger, Barbara Plank
  7. ParFam -- (Neural Guided) Symbolic Regression via Continuous Global Optimization
    Philipp Scholl, Katharina Bieker, Hillary Hauger, Gitta Kutyniok
  8. Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space
    Mohamed Amine Ketata, Nicholas Gao, Johanna Sommer, Tom Wollschläger, Stephan Günnemann
  9. Constructing Confidence Intervals for Average Treatment Effects from Multiple Datasets
    Yuxin Wang, Maresa Schröder, Dennis Frauen, Jonas Schweisthal, Konstantin Hess, Stefan Feuerriegel

Workshops

  10. Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning
    Lisa Wimmer, Bernd Bischl, Ludwig Bothmann
    Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions
  11. Privacy Amplification by Structured Subsampling for Deep Differentially Private Time Series Forecasting
    Jan Schuchardt, Mina Dalirrooyfard, Jed Guzelkabaagac, Anderson Schneider, Yuriy Nevmyvaka, Stephan Günnemann
    Workshop on Advances in Financial AI: Opportunities, Innovations and Responsible AI
  12. Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback
    Niklas Ippisch, Anna-Carolina Haensch, Markus Herklotz, Jan Simson, Jacob Beck, Malte Schierholz
    Workshop on Human-AI Coevolution
  13. Exact Certification of (Graph) Neural Networks Against Label Poisoning
    Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
    VerifAI: AI Verification in the Wild

🥂 Congratulations!

We are proud to announce that our relAI Director Gitta Kutyniok has been invited to become a member of the US National Academy of Artificial Intelligence (NAAI). NAAI is committed to advancing artificial intelligence by fostering collaboration among leading experts and promoting innovative research and development.

The election acknowledges Gitta Kutyniok's distinguished contributions to applied harmonic analysis, compressed sensing, and artificial intelligence. It also recognizes her leadership as the Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at Ludwig-Maximilians-Universität München (LMU Munich), which has significantly advanced research and collaboration in these fields. NAAI further highlighted her recognition as a SIAM Fellow in 2019 and an IEEE Fellow in 2024, honors that underscore her outstanding accomplishments and the high regard in which she is held by her peers.

Save the date!

We proudly invite you to our next Munich AI Lecture. This flagship AI speaker series in Munich is co-organized by relAI.

Event Details:

  • Speaker: Prof. Michael Mahoney (UC Berkeley)
  • Title: Foundational Methods for Foundation Models for Scientific Machine Learning
  • Date and Time: March 26, 2025, 14:00 CET
  • Location: Lecture Hall W201, Professor-Huber-Platz 2, LMU Munich, 80539 Munich (Metro U3/U6 Universität, Exit B). LMU Room Finder

Abstract

The remarkable successes of ChatGPT in natural language processing (NLP) and related developments in computer vision (CV) motivate the question of what foundation models would look like and what new advances they would enable, when built on the rich, diverse, multimodal data that are available from large-scale experimental and simulational data in scientific computing (SC), broadly defined. Such models could provide a robust and principled foundation for scientific machine learning (SciML), going well beyond simply using ML tools developed for internet and social media applications to help solve future scientific problems. Prof. Mahoney will describe recent work demonstrating the potential of the "pre-train and fine-tune" paradigm, widely-used in CV and NLP, for SciML problems, demonstrating a clear path towards building SciML foundation models; as well as recent work highlighting multiple "failure modes" that arise when trying to interface data-driven ML methodologies with domain-driven SC methodologies, demonstrating clear obstacles to traversing that path successfully. Prof. Mahoney will also describe initial work on developing novel methods to address several of these challenges, as well as their implementations at scale, a general solution to which will be needed to build robust and reliable SciML models consisting of millions or billions or trillions of parameters.

Bio of the speaker

Michael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He is also an Amazon Scholar as well as head of the Machine Learning and Analytics Group at the Lawrence Berkeley National Laboratory. He works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, scientific machine learning, scalable stochastic optimization, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, computational methods for neural network analysis, physics informed machine learning, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department. Among other things, he was on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI), he was on the National Research Council's Committee on the Analysis of Massive Data, he co-organized the Simons Institute's fall 2013 and 2018 programs on the foundations of data science, he ran the Park City Mathematics Institute's 2016 PCMI Summer Session on The Mathematics of Data, he ran the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets, and he was the Director of the NSF/TRIPODS-funded FODA (Foundations of Data Analysis) Institute at UC Berkeley. More information is available at https://www.stat.berkeley.edu/~mmahoney/ .

This event is open to everyone; registration is not required.

The Roland Berger Foundation (RBS) and TUM have begun a collaboration to promote the AI skills of socially disadvantaged children and young people. RBS works with 70 partner schools throughout Germany to provide scholarships, from the second grade onwards, to talented primary school pupils from socially disadvantaged families.

relAI Fellow Enkelejda Kasneci is the scientific director of the project. The scholarship holders learn how to use AI responsibly and reflectively. AI tools are also being developed to better support children and young people facing difficult starting conditions.

For more information, please visit the websites of the Roland Berger Foundation and TUM.

relAI warmly welcomes LMU Professor David Rügamer to our school. David heads the Data Science Group at LMU, and he is also a Principal Investigator at the Munich Center for Machine Learning (MCML).

Prof. Rügamer works on fundamental topics within the relAI research area Mathematical and Algorithmic Foundations as applied to neural networks, such as symmetries, sparsity, and uncertainty quantification in deep neural networks. His work is also relevant to the relAI research area Algorithmic Decision-Making. relAI will benefit from his research experience as well as from his contributions to our curriculum, including lectures.

In this interview, relAI Fellow Daniel Rückert, recently awarded Germany’s highest research distinction, the Gottfried Wilhelm Leibniz Prize, shares his insights on the role of artificial intelligence (AI) systems in medicine.

Prof. Rückert discusses the significant potential of AI in early disease diagnosis, prevention, and personalized treatment, and explains his contributions to AI-assisted analysis of X-ray and MRI images, focusing on the detailed detection of abnormalities and the fast reconstruction of high-quality images. Notably, he emphasizes that reliability and explainability are essential aspects of AI systems in medicine and are among his research topics at relAI.

Follow this link to read the complete interview.

It is our great pleasure to announce the next Munich AI Lecture, featuring Prof. Dr. Jean-Luc Starck, Director of Research and head of the CosmoStat laboratory at the Institute of Research into the Fundamental Laws of the Universe, Département d'Astrophysique, CEA-Saclay, France. The lecture is organized by relAI director Prof. Dr. Gitta Kutyniok and co-hosted by Prof. Dr. Jochen Weller, with the support of BAIOSPHERE, the Bavarian AI Network.

Event Details:

  • Speaker: Prof. Dr. Jean-Luc Starck
  • Title: Unveiling the Cosmos: Deep Learning Solutions to Inverse Problems in Astrophysics
  • Date and Time: Tuesday, 18 February 2025, 17:00 to 18:30
  • Location: Senatssaal, LMU Munich, Geschwister-Scholl-Platz 1, Munich

Prof. Starck will speak about how inverse problems in astrophysics, such as image reconstruction or gravitational lensing data analysis, have traditionally relied on sparsity-based techniques to recover underlying physical structures from incomplete or noisy data. Deep learning methods are now replacing these classical approaches, offering unprecedented performance gains in accuracy and efficiency. Despite their success, deep learning methods introduce new challenges, including interpretability, generalization across diverse astrophysical scenarios, and robustness to observational biases. In this talk, the speaker will explore the transition from sparsity-driven methods to deep learning-based solutions, highlighting both the opportunities and pitfalls of this paradigm shift. Prof. Starck will discuss recent developments, applications to astrophysical data, and future directions for addressing the emerging challenges in this rapidly evolving field.
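For readers who want a concrete picture of the classical baseline mentioned above, a generic sparsity-regularized inverse problem can be sketched as follows. This is an illustrative textbook formulation, not a formula taken from the talk; the observation operator A, the sparsifying transform Φ, and the weight λ are placeholder symbols.

    % Illustrative sketch only: y is the observed data, A models the instrument,
    % n is noise, and the l1 term promotes sparsity in the transform domain Phi.
    \[
      \mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n}, \qquad
      \hat{\mathbf{x}} = \operatorname*{arg\,min}_{\mathbf{x}}\;
        \tfrac{1}{2}\,\lVert \mathbf{y} - \mathbf{A}\mathbf{x} \rVert_2^2
        + \lambda\,\lVert \boldsymbol{\Phi}\mathbf{x} \rVert_1 .
    \]

Deep learning solutions of the kind discussed in the talk typically replace such a hand-crafted sparsity prior with one learned from data.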

For more information about the event and the speaker, visit this weblink.

relAI is a co-organiser of the Munich AI Lectures. Find more information on this and other upcoming events on the Munich AI Lectures home page.

This month, a team of 13 talented master's and PhD students from our graduate school in reliable AI (relAI) showcased their quantitative skills and teamwork in an exciting estimation competition. The participants had 30 minutes to work on 13 estimation challenges, such as "What is the average discharge of the Isar where it meets the Donau, in m³/s?"

The spirit of competition and learning was truly inspiring. Check out the photo of our team, proudly representing relAI.

relAI warmly welcomes TUM Professors Lorenzo Masia and Bene Wiestler to our school. With the addition of these two excellent fellows, relAI will enhance its research areas “Robotics & Interacting Systems” and “Medicine & Healthcare”. 

Lorenzo Masia is a professor of "Intelligent BioRobotic Systems" and serves as the Deputy Director of the Munich Institute for Robotics and Machine Intelligence (MIRMI) at TUM. His research focuses on Rehabilitation Robotics and ExoSuits. His work involves developing reliable AI systems for human augmentation and assistance in medical contexts, which aligns perfectly with the mission of relAI.

Bene Wiestler is a professor of “AI for Image-Guided Diagnosis and Therapy” at the TUM School of Medicine and Health. His interdisciplinary approach merges medicine with machine learning, focusing on the research and application of advanced artificial intelligence models to tackle important clinical challenges. A key aspect of his work relevant to relAI is the development of safe and reliable AI models for medical applications. 

Multi-Head Attention has become ubiquitous in modern machine learning architectures, but how much efficiency can still be gained? This question was the focus of Dr. Maximilian Baust’s talk, "Beyond Transformers: Why Beating Multi-Head Attention is Hard."

In his presentation, Dr. Baust explored potential solutions for improving efficiency, ranging from implementation strategies and algorithmic modifications to new architectures, including spiking neural networks.

Dr. Maximilian Baust serves as Director of Solution Architecture Industries EMEA at NVIDIA and is also an industry mentor for one of relAI’s PhD students.

We extend our gratitude to Dr. Baust for sharing his insights and to our director, Gitta Kutyniok, for inviting him to relAI.