relAI MSc call 2025

We are excited to announce that the call for applications to the MSc program 2025 of our Konrad Zuse School of Excellence in Reliable AI (relAI) is now open!

The novel, innovative relAI MSc program complements the MSc programs at TUM and LMU, offering cross-sectional training for a successful education in AI. It provides coherent, yet flexible and personalized training that combines scientific knowledge, professional development courses and industrial exposure.

Funded applicants will receive a scholarship of up to 992 EUR (depending on independent income). They are further supported by travel grants, e.g., for home travel.  

We highly encourage you to apply if you have: 

  • an excellent bachelor’s degree in computer science, mathematics, engineering, natural sciences or other data science/machine learning/AI related disciplines;
  • a genuine interest in working on a topic of reliable AI covering aspects such as safety, security, privacy and responsibility in one of relAI’s research areas: Mathematical & Algorithmic Foundations, Algorithmic Decision-Making, Medicine & Healthcare, or Robotics & Interacting Systems;
  • certified proficiency in English.

📆 Application Deadline: 17 June 2025 (23:59 AoE)

🔗 Apply now: https://zuseschoolrelai.de/application/#MSc-Program-Application

Please help us spread the word, especially to excellent international candidates.

The excellent work of relAI students will be prominently represented at the Thirteenth International Conference on Learning Representations (ICLR) 2025, which will take place at the Singapore EXPO from 24 to 28 April 2025.

Thirteen publications from our students will be presented at the conference, nine of them in the main track. Notably, four out of these nine publications have been selected for Oral or Spotlight presentations. This is a significant achievement and demonstrates the high quality of relAI research, considering that only 15% of accepted papers are invited to give a talk.

If you plan to attend the conference, do not miss the opportunity to discuss these publications directly with some of our students. Be sure to attend the Oral presentation by Yan Scholten titled "A Probabilistic Perspective on Unlearning and Alignment for Large Language Models" on 24 April. You can learn about Lisa Wimmer's work, "Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning", at the Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions. Additionally, check out the posters of Amine Ketata and Chengzhi Hu! Amine will be presenting his work on “Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space”, and you can talk to Chengzhi Hu about “Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation”.

Full list of relAI publications at ICLR 2025:

    Oral Presentation - Main Track

  1. A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
    Yan Scholten, Stephan Günnemann, Leo Schwinn

    Spotlight Presentations - Main Track

  2. Exact Certification of (Graph) Neural Networks Against Label Poisoning
    Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
  3. Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning
    Yan Scholten, Stephan Günnemann
  4. Signature Kernel Conditional Independence Tests in Causal Discovery for Stochastic Processes
    Georg Manten, Cecilia Casolo, Emilio Ferrucci, Søren Wengel Mogensen, Cristopher Salvi, Niki Kilbertus

    Posters - Main Track

  5. Differentially Private Learners for Heterogeneous Treatment Effects
    Maresa Schröder, Valentyn Melnychuk, Stefan Feuerriegel
  6. Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation
    Xinpeng Wang, Chengzhi Hu, Paul Röttger, Barbara Plank
  7. ParFam -- (Neural Guided) Symbolic Regression via Continuous Global Optimization
    Philipp Scholl, Katharina Bieker, Hillary Hauger, Gitta Kutyniok
  8. Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space
    Mohamed Amine Ketata, Nicholas Gao, Johanna Sommer, Tom Wollschläger, Stephan Günnemann
  9. Constructing Confidence Intervals for Average Treatment Effects from Multiple Datasets
    Yuxin Wang, Maresa Schröder, Dennis Frauen, Jonas Schweisthal, Konstantin Hess, Stefan Feuerriegel

    Workshops

  10. Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning
    Lisa Wimmer, Bernd Bischl, Ludwig Bothmann
    Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions
  11. Privacy Amplification by Structured Subsampling for Deep Differentially Private Time Series Forecasting
    Jan Schuchardt, Mina Dalirrooyfard, Jed Guzelkabaagac, Anderson Schneider, Yuriy Nevmyvaka, Stephan Günnemann
    Workshop on Advances in Financial AI: Opportunities, Innovations and Responsible AI
  12. Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback
    Niklas Ippisch, Anna-Carolina Haensch, Markus Herklotz, Jan Simson, Jacob Beck, Malte Schierholz
    Workshop on Human-AI Coevolution
  13. Exact Certification of (Graph) Neural Networks Against Label Poisoning
    Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
    VerifAI: AI Verification in the Wild
  14. Graph Neural Networks for Enhancing Ensemble Forecasts of Extreme Rainfall
    Christopher Bülte, Sohir Maskey, Philipp Scholl, Jonas Berg, Gitta Kutyniok
    Workshop on Tackling Climate Change with Machine Learning

🎉The 8th edition of DataFest Germany will be held at Ludwig-Maximilians-Universität in Munich from 28 March to 30 March 2025. relAI is proud to support the event organization again this year. Additionally, a team of relAI students will participate in this exciting competition and networking opportunity.

The event is an annual data-driven competition, commonly referred to as a “hackathon,” that alternates between Mannheim and Munich. It is organized in collaboration with partners from industry and research institutions.

DataFest Germany follows the model of DataFest™, organized by the American Statistical Association. The world's first DataFest took place at the University of California in 2011. Since then, many universities have adopted the DataFest format.

Learn more about DataFest Germany at this link.

This month, a team of 13 talented master's and PhD students from our graduate school in reliable AI (relAI) showcased their quantitative skills and teamwork in an exciting estimation competition. The participants had 30 minutes to work on 13 estimation challenges, such as "What is the average discharge of the Isar where it meets the Danube, in m³/s?"

The spirit of competition and learning was truly inspiring. Check out the photo of our team, proudly representing relAI.

The recent relAI Collab Accelerator Workshop brought together researchers to share their work, explore new ideas, and identify potential collaborations. Here's a brief overview of the event:

The day began with the participants pitching their research topics from 9:00 to 11:00, followed by a coffee break until 11:15. After the break, participants engaged in one-to-one sessions until 14:00, followed by lunch, discussion, and feedback.

Participants contributed diverse topics in the field of Machine Learning and Artificial Intelligence. Mohamed Amine Ketata discussed Generative AI for Graphs, Max Beier presented on Learning Operator of Dynamical Systems, Richard Schwank explored Robust Aggregation through the Geometric Median, and Yurou Liang delved into Differentiable Learning for Causal Discovery.

The workshop was fertile ground for generating new research ideas and possible collaborations. During the one-to-one discussions, participants identified several projects for cooperation, such as principled modifications of loss functions to enhance robustness against outliers in the data.

Participants gained new insights into their research during the event. For example, one participant discovered a probabilistic approach to their forecasting issue without relying on a model. Another learned about structure learning as it applies to tabular data, which provided a temporal interpretation of the data. One researcher was challenged about the convexity of their problem. Discussions highlighted intriguing applications of median aggregation techniques to abstract spaces, connecting concentration inequalities with uncertainty quantification.

The relAI Collab Accelerator Workshop was an enriching experience, offering a platform for researchers to connect, share insights, and pave the way for future collaborations. The feedback during this first iteration will help refine the format and make it even more engaging. We are looking forward to the next iteration!

relAI thanks Max Beier and Richard Schwank for their initiative and the organization of the event.

Congratulations!

The recent work of relAI PhD student Lukas Gosch has won the Best Paper Award at the 3rd AdvML-Frontiers workshop at the 38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024). The workshop and paper presentation took place at the Vancouver Convention Center in Canada on December 14th, 2024.

Lukas is a PhD student at relAI, advised by the relAI Co-Director Prof. Dr. Stephan Günnemann. His research focuses on robust and reliable machine learning, as well as machine learning on graphs.

The award-winning paper "Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks", which Lukas authored together with Mahalakshmi Sabanayagam and relAI Fellows Debarghya Ghoshdastidar and Stephan Günnemann, develops the first architecture-aware certification technique for common neural networks against poisoning and backdoor attacks.

Explore this outstanding paper here.

Our sincerest congratulations to Lukas and his co-authors on this achievement!

We are excited to announce that the call for applications to the PhD program 2025 of our Konrad Zuse School of Excellence in Reliable AI (relAI) is now open!

The novel, innovative relAI PhD program offers cross-sectional training for a successful education in AI, including scientific knowledge, professional development courses and industrial exposure, providing coherent, yet flexible and personalized training.

Funded applicants will receive a full salary for three years, including social benefits (TV-L E13 of the German public sector). They are further supported by travel grants, e.g. for conference attendance, research stays or home travel. Doctoral students are hosted by a relAI Fellow who helps them define their research project. Depending on the affiliation of this hosting Fellow, they enrol at TUM or LMU.

We highly encourage you to apply if you have: 

  • an excellent master’s degree (or equivalent) in computer science, mathematics, engineering, natural sciences or other data science/machine learning/AI related disciplines;
  • a genuine interest in working on a topic of reliable AI covering aspects such as safety, security, privacy and responsibility in one of relAI’s research areas: Mathematical & Algorithmic Foundations, Algorithmic Decision-Making, Medicine & Healthcare, or Robotics & Interacting Systems;
  • certified proficiency in English.

📆 Application Deadline: January 13th, 2025

🔗 Apply now: www.zuseschoolrelai.de/application

Please help us spread the word, especially to excellent international candidates.

Congratulations! relAI student Sameer Ambekar has won the Best Paper Award at the MICCAI Workshop on Advancing Data Solutions in Medical Imaging AI (ADSMI).

Sameer is a PhD student at relAI, advised by the relAI Fellow Julia A. Schnabel. His research focuses on test-time adaptation and domain generalization for medical imaging.

His award-winning paper “Selective Test-Time Adaptation for Unsupervised Anomaly Detection using Neural Implicit Representations”, co-authored with Julia A. Schnabel and Cosmin Bereca, presents a novel zero-shot methodology to adapt models in real time to test images from new domains using deep pre-trained features. The approach is validated on brain anomaly detection data. 

This work addresses domain shift at test-time, which Sameer explains in more detail in his recently published relAI blog post. In the post, you can also learn about the importance of handling domain shifts to make AI more reliable: https://zuseschoolrelai.de/blog/mitigating-domain-shifts/  

Congratulations on this achievement!  

Image Copyright (c): Thomas Abé/Studienstiftung

Congratulations to relAI student Maria Matveev

The German National Scholarship Foundation (Studienstiftung) has awarded Maria the Civic Engagement Award 2024 for her exceptional volunteering work with Lern-Fair. Maria co-founded and chairs Lern-Fair e.V., a non-profit organization dedicated to providing free educational opportunities for underprivileged pupils. Since the start of the online platform in 2020 during the Covid pandemic, more than 15,000 pupils have been supported through free tutoring or group courses.

Maria is a relAI PhD student at the chair for Mathematical Foundations of Artificial Intelligence at LMU and the Munich Center for Machine Learning. Her PhD research, advised by the relAI director Gitta Kutyniok, focuses on the mathematical description and understanding of training dynamics related to generalization, a crucial factor for ensuring the reliability of neural networks. 

Learn more about Maria's volunteer work in the video portrait (in German): https://youtu.be/EUZdm--sqmc?feature=shared

Are you interested in frontier AI systems, their astonishing capabilities and risks for humanity? Then join us for a thought-provoking deep dive and exclusive OpenAI Live Q&A on AI safety. 

  • Date: Wednesday, May 8th, 2024 | 19:00 – 20:30 
  • Location: Room B006, Department of Mathematics (Theresienstr. 39) or online 
  • Language: English 

Agenda: 

  • 19:00 – 19:05: Doors open 
  • 19:05 – 19:30: Introduction to AI Safety 
  • 19:30 – 20:15: Presentation & Live Q&A with OpenAI researcher Jan H. Kirchner, co-author of the weak-to-strong generalization paper 
  • 20:15 – 20:30: Closing talk – What can we do? 
  • 20:30 – onward: Optional socializing and small group discussions with free drinks and snacks. 

Please register on the following webpage and prepare your questions!