relAI students at ICLR 2026

relAI research will be featured at the International Conference on Learning Representations (ICLR), which will take place this year at the Riocentro Convention and Event Center in Rio de Janeiro, Brazil, from April 23rd to 27th, 2026. ICLR is one of the most influential and highly regarded conferences in machine learning and artificial intelligence research.

relAI Publications at ICLR

Meet relAI Students

If you attend ICLR, be sure to take the opportunity to discuss relAI research with relAI students attending the conference: Sarah Ball, Cecilia Casolo, Lukas Gosch, Valentyn Melnychuk, Ole Petersen, Yusuf Sale, Yan Scholten, Jonas von Berg, and Jingpei Wu. You can find their research papers in the list below.

Full list of relAI publications at ICLR 2026:

    Main Track


  1. Efficient Credal Prediction through Decalibration
    Paul Hofman, Timo Löhr, Maximilian Muschalik, Yusuf Sale, Eyke Hüllermeier
  2. Discrete Bayesian Sample Inference for Graph Generation
    Ole Petersen, Marcel Kollovieh, Marten Lienen, Stephan Günnemann
  3. Identifiability Challenges in Sparse Linear Ordinary Differential Equations
    Cecilia Casolo, Sören Becker, Niki Kilbertus
  4. Sampling-aware Adversarial Attacks Against Large Language Models
    Tim Beyer, Yan Scholten, Leo Schwinn, Stephan Günnemann
  5. Model Collapse Is Not a Bug but a Feature in Machine Unlearning for LLMs
    Yan Scholten, Sophie Xhonneux, Leo Schwinn, Stephan Günnemann
  6. Efficient and Sharp Off-Policy Learning under Unobserved Confounding
    Konstantin Hess, Dennis Frauen, Valentyn Melnychuk, Stefan Feuerriegel
  7. Overlap-Adaptive Regularization for Conditional Average Treatment Effect Estimation
    Valentyn Melnychuk, Dennis Frauen, Jonas Schweisthal, Stefan Feuerriegel
  8. GDR-learners: Orthogonal Learning of Generative Models for Potential Outcomes
    Valentyn Melnychuk, Stefan Feuerriegel
  9. IGC-Net for conditional average potential outcome estimation over time
    Konstantin Hess, Dennis Frauen, Valentyn Melnychuk, Stefan Feuerriegel
  10. On the Impossibility of Separating Intelligence from Judgment: The Computational Intractability of Filtering for AI Alignment
    Sarah Ball, Greg Gluch, Shafi Goldwasser, Frauke Kreuter, Omer Reingold, Guy N. Rothblum
  11. Foundation Models for Causal Inference via Prior-Data Fitted Networks
    Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
  12. An Orthogonal Learner for Individualized Outcomes in Markov Decision Processes
    Emil Javurek, Valentyn Melnychuk, Jonas Schweisthal, Konstantin Hess, Dennis Frauen, Stefan Feuerriegel
  13. The Price of Robustness: Stable Classifiers Need Overparameterization
    Jonas von Berg, Adalbert Fono, Massimiliano Datres, Sohir Maskey, Gitta Kutyniok

    Journal Track


  1. Adversarial Robustness of Graph Transformers
    Philipp Foth, Simon Geisler, Lukas Gosch, Leo Schwinn, Stephan Günnemann
    Transactions on Machine Learning Research (TMLR), Journal Track Poster - ICLR 2026, 2025
  2. Online Selective Conformal Prediction: Errors and Solutions
    Yusuf Sale, Aaditya Ramdas
    Transactions on Machine Learning Research (TMLR), Journal Track Poster - ICLR 2026, 2025

    Workshops

  1. Exact Certification of Neural Networks and Partition Aggregation Ensembles against Label Poisoning
    Ajinkya Mohgaonkar, Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann
    ICLR 2026 Workshop on Principled Design for Trustworthy AI
  2. ProcessThinker: Enhancing Multi-modal Large Language Models Reasoning via Rollout-based Process Reward
    Jingpei Wu, Xiao Han, Weixiang Shen, Boer Zhang, Zifeng Ding, Volker Tresp
    ICLR 2026 Workshop on Logical Reasoning of Large Language Models

We are excited to announce the upcoming relAI symposium, “Shaping Science, Industry, and Society: Responsible Transformation through Generative AI,” dedicated to the latest developments and future prospects in the field of reliable artificial intelligence. The event aims to promote exchange between science, politics, and industry.

📅 Date and Time: 24 November, 09:00 – 13:00

📍 Location: Haus der Bayerischen Wirtschaft

Program highlights:

  • Welcoming remarks from high-ranking representatives from politics and science, including State Secretary Dr. Rolf-Dieter Jungk (BMFTR, virtual) and the presidents of our two universities, Prof. Thomas Hofmann (TUM) and Prof. Matthias Tschöp (LMU).
  • High-caliber panel discussion with: Dr. Philipp Baaske (LMU, VP Entrepreneurship), Prof. Claudia Eckert (acatech, President), Anna Kopp (Microsoft Digital Germany, CIO/CDO), and Maria Sievert (inveox, Founder).

Agenda and Registration

The complete agenda and the link to register can be found at the event URL.

❗Tickets are free of charge but subject to availability. You will receive a confirmation if your registration was successful.

Education is a crucial societal priority and a strategic focus for the application of reliable AI. To address this, relAI has introduced a new research area: Learning & Instruction. This initiative will be led by relAI fellows Prof. Jochen Kuhn from LMU and Prof. Enkelejda Kasneci from TUM, both of whom are experts in educational technology.

Learning & Instruction focuses on exploring how reliable AI can be used to transform education in meaningful and responsible ways. It investigates the potential of intelligent tutoring systems, adaptive feedback, and digital learning assistants to personalize learning paths and provide targeted support. At the same time, it examines the broader effects of AI on teaching and learning: how AI systems shape learner motivation, teacher roles, and the dynamics of human-AI collaboration. 

By bringing together expertise from artificial intelligence, learning sciences, and educational research, Learning & Instruction aims to develop robust and trustworthy AI applications that not only advance technology, but also serve pedagogical goals and democratic values.

From July 29 to August 1, 2025, relAI was delighted to welcome a group of Chinese students for the first relAI International Summer School, which took place at TUM and LMU.

Throughout the four days, various relAI Fellows and PhD students provided comprehensive insights into the four relAI research areas: mathematical and algorithmic foundations, medicine & healthcare, robotics & interacting systems, and algorithmic decision-making. The school promoted an educational exchange focused on the development of reliable AI, with the goal of exploring current trends in AI development.

 

The excellent work of relAI students will be prominently represented at the Thirteenth International Conference on Learning Representations (ICLR) 2025, which will take place at the Singapore EXPO from 24 to 28 April 2025.

Thirteen publications from our students will be presented at the conference, nine of them in the main track. Notably, four out of these nine publications have been selected for Oral or Spotlight presentations. This is a significant achievement and demonstrates the high quality of relAI research, considering that only 15% of accepted papers are invited to give a talk.

If you plan to attend the conference, do not miss the opportunity to discuss these publications directly with some of our students. Be sure to attend the Oral presentation by Yan Scholten titled "A Probabilistic Perspective on Unlearning and Alignment for Large Language Models" on 24 April. You can learn about Lisa Wimmer's work, "Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning," at the Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions. Additionally, check out the posters of Amine Ketata and Chengzhi Hu! Amine will be presenting his work on “Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space,” and you can talk to Chengzhi Hu about “Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation.”

Full list of relAI publications at ICLR 2025:

    Oral Presentation - Main Track

  1. A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
    Yan Scholten, Stephan Günnemann, Leo Schwinn

    Spotlight Presentations - Main Track

  1. Exact Certification of (Graph) Neural Networks Against Label Poisoning
    Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
  2. Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning
    Yan Scholten, Stephan Günnemann
  3. Signature Kernel Conditional Independence Tests in Causal Discovery for Stochastic Processes
    Georg Manten, Cecilia Casolo, Emilio Ferrucci, Søren Wengel Mogensen, Cristopher Salvi, Niki Kilbertus

    Posters - Main Track

  1. Differentially private learners for heterogeneous treatment effects
    Maresa Schröder, Valentyn Melnychuk, Stefan Feuerriegel
  2. Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation
    Xinpeng Wang, Chengzhi Hu, Paul Röttger, Barbara Plank
  3. ParFam -- (Neural Guided) Symbolic Regression via Continuous Global Optimization
    Philipp Scholl, Katharina Bieker, Hillary Hauger, Gitta Kutyniok
  4. Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space
    Mohamed Amine Ketata, Nicholas Gao, Johanna Sommer, Tom Wollschläger, Stephan Günnemann
  5. Constructing Confidence Intervals for Average Treatment Effects from Multiple Datasets
    Yuxin Wang, Maresa Schröder, Dennis Frauen, Jonas Schweisthal, Konstantin Hess, Stefan Feuerriegel

    Workshops

  1. Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning
    Lisa Wimmer, Bernd Bischl, Ludwig Bothmann
    Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions
  2. Privacy Amplification by Structured Subsampling for Deep Differentially Private Time Series Forecasting
    Jan Schuchardt, Mina Dalirrooyfard, Jed Guzelkabaagac, Anderson Schneider, Yuriy Nevmyvaka, Stephan Günnemann
    Workshop on Advances in Financial AI: Opportunities, Innovations and Responsible AI
  3. Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback
    Niklas Ippisch, Anna-Carolina Haensch, Markus Herklotz, Jan Simson, Jacob Beck, Malte Schierholz
    Workshop on Human-AI Coevolution
  4. Exact Certification of (Graph) Neural Networks Against Label Poisoning
    Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
    VerifAI: AI Verification in the Wild
  5. Graph Neural Networks for Enhancing Ensemble Forecasts of Extreme Rainfall
    Christopher Bülte, Sohir Maskey, Philipp Scholl, Jonas Berg, Gitta Kutyniok
    Workshop on Tackling Climate Change with Machine Learning

Are you interested in frontier AI systems, their astonishing capabilities and risks for humanity? Then join us for a thought-provoking deep dive and exclusive OpenAI Live Q&A on AI safety. 

  • Date: Wednesday, May 8th, 2024 | 19:00 – 20:30 
  • Location: Room B006, Department of Mathematics (Theresienstr. 39) or online 
  • Language: English 

Agenda: 

  • 19:00 – 19:05: Doors open 
  • 19:05 – 19:30: Introduction to AI Safety 
  • 19:30 – 20:15: Presentation & Live Q&A with OpenAI researcher Jan H. Kirchner, co-author of the weak-to-strong generalization paper 
  • 20:15 – 20:30: Closing talk – What can we do? 
  • 20:30 – onward: Optional socializing and small group discussions with free drinks and snacks. 

Please register on the following webpage and prepare your questions! 

We are honored to announce that the Zuse Schools of Excellence in AI have been featured in the AI action plan of the German Federal Ministry of Education and Research ("BMBF-Aktionsplan Künstliche Intelligenz")! 

This strategic plan aims to significantly advance AI research, development, and application in Germany. It emphasizes strengthening Germany's position in the global AI landscape. This is done by enhancing research infrastructure and transfer, facilitating AI integration into various sectors such as healthcare, fostering a dialogue on the societal and ethical implications of AI and ensuring AI competencies at all levels in the long term. Notably, the BMBF has committed over 1.6 billion Euros to this AI action plan.  

Being explicitly named in this influential plan highlights the dedication of the three Zuse Schools, including relAI, to fostering ethical, reliable AI innovations and preparing the next generation of AI leaders. 

A huge thank you to our supporters, partners, fellows, and students for making this milestone possible!