relAI MSc call 2026

📢 We are excited to announce that the call for applications to the MSc program 2026 of our Konrad Zuse School of Excellence in Reliable AI (relAI) is now open!

The innovative relAI MSc program complements the MSc programs at TUM and LMU, offering cross-sectional training for a successful education in AI. It provides a coherent yet flexible and personalized curriculum that combines scientific training, professional development courses, and industry exposure.

Funded applicants will receive a scholarship of up to 992 EUR (depending on independent income) and are further supported by travel grants, e.g., for home travel.

We highly encourage you to apply!

📆 Application Deadline: 15 June 2026 (23:59 AoE)

🔗 Apply now: https://zuseschoolrelai.de/application/#MSc-Program-Application

Please help us in spreading the word, especially to excellent international candidates.

relAI research will be featured at the International Conference on Learning Representations (ICLR), which takes place this year at the Riocentro Convention and Event Center in Rio de Janeiro, Brazil, from April 23rd to 27th, 2026. ICLR is one of the leading and most influential conferences in machine learning and artificial intelligence research.

relAI Publications at ICLR

Meet relAI Students

If you attend ICLR, be sure to take the opportunity to discuss relAI research with relAI students attending the conference: Sarah Ball, Cecilia Casolo, Lukas Gosch, Valentyn Melnychuk, Ole Petersen, Yusuf Sale, Yan Scholten, Jonas von Berg, and Jingpei Wu. You can find their research papers in the list below.

Full list of relAI publications at ICLR 2026:

    Main Track


  1. Efficient Credal Prediction through Decalibration
    Paul Hofman, Timo Löhr, Maximilian Muschalik, Yusuf Sale, Eyke Hüllermeier
  2. Discrete Bayesian Sample Inference for Graph Generation
    Ole Petersen, Marcel Kollovieh, Marten Lienen, Stephan Günnemann
  3. Identifiability Challenges in Sparse Linear Ordinary Differential Equations
    Cecilia Casolo, Sören Becker, Niki Kilbertus
  4. Sampling-aware Adversarial Attacks Against Large Language Models
    Tim Beyer, Yan Scholten, Leo Schwinn, Stephan Günnemann
  5. Model Collapse Is Not a Bug but a Feature in Machine Unlearning for LLMs
    Yan Scholten, Sophie Xhonneux, Leo Schwinn, Stephan Günnemann
  6. Efficient and Sharp Off-Policy Learning under Unobserved Confounding
    Konstantin Hess, Dennis Frauen, Valentyn Melnychuk, Stefan Feuerriegel
  7. Overlap-Adaptive Regularization for Conditional Average Treatment Effect Estimation
    Valentyn Melnychuk, Dennis Frauen, Jonas Schweisthal, Stefan Feuerriegel
  8. GDR-learners: Orthogonal Learning of Generative Models for Potential Outcomes
    Valentyn Melnychuk, Stefan Feuerriegel
  9. IGC-Net for conditional average potential outcome estimation over time
    Konstantin Hess, Dennis Frauen, Valentyn Melnychuk, Stefan Feuerriegel
  10. On the Impossibility of Separating Intelligence from Judgment: The Computational Intractability of Filtering for AI Alignment
    Sarah Ball, Greg Gluch, Shafi Goldwasser, Frauke Kreuter, Omer Reingold, Guy N. Rothblum
  11. Foundation Models for Causal Inference via Prior-Data Fitted Networks
    Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
  12. An Orthogonal Learner for Individualized Outcomes in Markov Decision Processes
    Emil Javurek, Valentyn Melnychuk, Jonas Schweisthal, Konstantin Hess, Dennis Frauen, Stefan Feuerriegel
  13. The Price of Robustness: Stable Classifiers Need Overparameterization
    Jonas von Berg, Adalbert Fono, Massimiliano Datres, Sohir Maskey, Gitta Kutyniok

    Journal Track


  1. Adversarial Robustness of Graph Transformers
    Philipp Foth, Simon Geisler, Lukas Gosch, Leo Schwinn, Stephan Günnemann
    Transactions on Machine Learning Research (TMLR), Journal Track Poster - ICLR 2026, 2025
  2. Online Selective Conformal Prediction: Errors and Solutions
    Yusuf Sale, Aaditya Ramdas
    Transactions on Machine Learning Research (TMLR), Journal Track Poster - ICLR 2026, 2025

    Workshops

  1. Exact Certification of Neural Networks and Partition Aggregation Ensembles against Label Poisoning
    Ajinkya Mohgaonkar, Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann
    ICLR 2026 Workshop on Principled Design for Trustworthy AI
  2. ProcessThinker: Enhancing Multi-modal Large Language Models Reasoning via Rollout-based Process Reward
    Jingpei Wu, Xiao Han, Weixiang Shen, Boer Zhang, Zifeng Ding, Volker Tresp
    ICLR 2026 Workshop on Logical Reasoning of Large Language Models

We are happy to announce that Barbara Plank has joined relAI as a Fellow!

Barbara is Full Professor for AI and Computational Linguistics at LMU Munich, where she holds the Chair in AI & Computational Linguistics and co-directs the Center for Information and Language Processing (CIS). She also heads the Munich AI & NLP lab (MaiNLP) and holds a visiting professorship at the IT University of Copenhagen.

Her research on robustness, domain shift, and human label variation aligns well with relAI’s Algorithmic Decision Making research area. This work explores how AI systems learn and make decisions in the face of uncertainty and disagreement. Additionally, her emphasis on interpretability, reasoning, and trustworthy evaluation provides essential foundations for developing reliable, fair, and transparent algorithmic decision systems.

At relAI, Barbara will contribute by participating in seminars, workshops, and panels, as well as offering career advice.

A warm welcome! 🤝

🎉 relAI is proud to announce that the International Association for Dental, Oral, and Craniofacial Research (IADR) has named relAI Fellow Falk Schwendicke as the recipient of the 2026 IADR Distinguished Scientist William H. Bowen Research in Dental Caries Award 🦷 .

IADR is a nonprofit organization dedicated to advancing dental, oral, and craniofacial research for global health and well-being. This esteemed IADR award recognizes exceptional and innovative contributions to our understanding of caries etiology and the prevention of dental caries. It is one of the 17 IADR Distinguished Scientist Awards and is considered one of the highest honors bestowed by the organization.

Falk Schwendicke’s Research Achievements

His early work focused on minimally invasive and evidence-based caries management, particularly regarding selective carious tissue removal and its economic evaluation. This research has laid the groundwork for contemporary treatment guidelines. His recent studies have increasingly emphasized the integration of emerging technologies to overcome challenges in caries detection and management. Notably, he has been a pioneer in employing advanced artificial intelligence (AI) applications for radiographic analysis, diagnostic support, and predictive modeling.

A significant achievement in his career was leading a randomized controlled trial that evaluated AI-assisted caries detection. This study set new standards for clinical research in the field and informed subsequent cost-effectiveness analyses. Schwendicke also participates in numerous editorial and review roles and has presented at the IADR General Session and various scientific meetings. He has authored over 500 peer-reviewed publications and 30 book chapters and is ranked among the top 1% of most-cited dental researchers worldwide, according to the Stanford global ranking.

👉 Information sources

https://www.iadr.org/about/news-reports/press-releases/falk-schwendicke-named-recipient-2026-iadr-distinguished

https://www.linkedin.com/posts/prof-dr-falk-schwendicke-9bb6271a1_iadr2026-activity-7444709023079350272-kiFG/?utm_source=share&utm_medium=member_ios&rcm=ACoAAAMg4egBnT-dMw4VyJR7tdTe0Z-9xhGUZZI

Welcome on board 🛳️ !

relAI Fellow Carsten Marr is Professor for AI in Cell Therapy and Hematology at the Medical Faculty and Clinics of the Ludwig-Maximilians-Universität München, as well as Director of the Institute of AI for Health at Helmholtz Munich.

In recent years, he has made significant contributions to AI-based hematological cytology. His focus on the interpretability of models trained on patient data to make predictions in a biomedical context 🩺 closely aligns with relAI's central themes of safety and responsibility. His innovative multiple instance learning models facilitate the investigation of relevant cells for disease prediction, while sparse autoencoders help correlate image features with diagnostic concepts. Additionally, his work on linking images and language enables direct comparisons between understandable human terms and cellular patterns within gigabyte-sized digital scans. At relAI, he will support students through lectures, mentoring, and participation in events.

🎉 Congratulations!

relAI is thrilled to announce that Frauke Kreuter, relAI Fellow and member of the relAI Steering Committee, has been elected a Fellow of the American Association for the Advancement of Science (AAAS). AAAS is the world's largest general scientific society and publisher of the journal Science. Founded in 1848, this non-profit international organization promotes scientific freedom, responsibility, education, and collaboration for the benefit of humanity, and serves over 120,000 members.

Being elected as a Fellow is a prestigious honor that recognizes individuals whose contributions to advancing science and its applications in service to society have distinguished them among their peers and colleagues.

👉 More Information: https://www.lmu.de/ai-hub/en/news-events/all-news/news/prof.-dr.-frauke-kreuter-wins-2026-waksberg-award.html


🎉 Congratulations to the relAI PhD student Johanna Topalis and relAI Fellow Prof. Michael Ingrisch!

🏆 The article they co-authored, “ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports”, has been awarded the Most Cited Article in European Radiology (Impact Factor 2024) by the European Society of Radiology! The award was presented at the European Congress of Radiology (ECR) 2026 in Vienna by the Editor-in-Chief of European Radiology, Prof. Bernd Hamm.

📖 The article presents the first exploratory case study evaluating the quality of simplified radiology reports generated by the large language model (LLM) ChatGPT. Radiologists rated the reports as generally high quality but also identified errors that could lead to harmful patient interpretations. The findings highlight both the potential and the limitations of early large language models in clinical communication: while simplified reports can enhance accessibility, medical expert supervision and domain-specific adaptation are vital to ensure patient safety.

💡 The study, first published as a preprint in December 2022, was among the earliest scientific assessments of ChatGPT's ability to simplify radiology reports for patients. Since then, a rapidly growing body of research has explored the role of large language models in medical text simplification.

👉 Publication: https://link.springer.com/article/10.1007/s00330-023-10213-1

👉 Preprint: https://arxiv.org/abs/2212.14882


We are thrilled to welcome Majid Khadiv as a Fellow at relAI!✨ He is an Assistant Professor at the School of Computation, Information and Technology (CIT) of the Technical University of Munich (TUM), where he holds the Chair of AI Planning in Dynamic Environments, and is Principal Investigator at the Munich Institute of Robotics and Machine Intelligence (MIRMI).

His lab focuses on the fundamental question of how to develop a scalable approach to building intelligent humanoid robots while also providing formal safety guarantees for reliable deployment in our daily lives. This research direction aligns with relAI's goal of creating safe and secure AI made in Germany. Moreover, his work on ethics in robotics 🤖 complements relAI's mission by emphasizing the importance of ethical considerations in the development of reliable AI.

As a fellow, he will contribute to the relAI curriculum by delivering lectures to students and helping them gain practical experience through internships.

A warm welcome! 🤝

On March 10, relAI students had the privilege of hosting Dr. Sebastian Hallensleben, Chief Trust Officer at the relAI Industry Partner Resaro, as an invited speaker at the relAI student seminar. This seminar serves as an important platform that fosters valuable research exchanges and networking opportunities for our students.

Dr. Hallensleben is an expert at the intersection of AI research, regulation, and industry. He plays a significant role in developing AI standards for Europe as the Chair of CEN-CENELEC JTC 21, where European AI standards are being crafted to support EU regulations. Additionally, he co-chairs the AI risk and accountability initiatives at the Organisation for Economic Co-operation and Development (OECD).

About the Talk

In his talk, he shared valuable insights on the landscape of international AI standards and their development. The first half of the session focused on the EU AI Act, detailing how the currently developing landscape of harmonised standards will provide the technical basis for legal compliance.

Moving from regulation to practice, he was joined by Linus Stach to demonstrate how Resaro interacts with this landscape in the development of their AI evaluation platform, navigating the complexity of accurately communicating technical metrics to a wide audience of stakeholders. The speakers then demonstrated their evaluation framework using a case study based on public crime statistics from Baden-Württemberg. They showed how the framework can be used to assess model performance dimensions (such as privacy, consistency, and correctness), compare different models, and ensure compliant application. The seminar concluded with an extensive discussion on the practical challenges of defining and achieving AI reliability in real-world scenarios.

More about the Speaker

Sebastian is the initiator and Programme Chair of the Digital Trust Convention and Principal Advisor Digital Trust at KI Park. As Chief Trust Officer at Resaro, he works towards drilling down to ground truths about the capabilities of AI systems. Previously, he headed Digitalisation and Artificial Intelligence at the VDE Association for Electrical, Electronic and Information Technologies. He focuses in particular on operationalising AI ethics, characterizing AI quality, and building privacy-preserving trust infrastructures for a more resilient digital space.

🙌 We warmly welcome Valentin Hofmann, an incoming tenure-track assistant professor for Information and Language Processing using AI methods at LMU Munich.

Valentin Hofmann's research lies at the intersection of AI, natural language processing, and computational social science. A primary focus of his work is to enhance the robustness, safety, and fairness of large language models, particularly regarding social biases and their implications for reliable AI.

His studies on large language models are relevant to the relAI Research Area of 🤖 Robotics and Interactive Systems, as these models increasingly serve as essential components of interactive, human-facing AI systems, such as conversational assistants, where reliability is crucial. Furthermore, his research directly aligns with the relAI Central Themes of Safety and Responsibility by investigating and mitigating social biases and their potential risks in deployed AI language technologies. As a fellow, he will contribute to relAI through teaching, mentoring, and community activities.