relAI PhD Student Unai Fischer Abaigar talks about his research on a podcast

When we try to help the most vulnerable with limited resources, should we invest those resources in building better prediction models, or is it sometimes more effective to simply expand access and help more people, even if the targeting isn't perfect?

If you’d like to learn more about it, check out the 👉 podcast!

In this episode of Executive Code, PhD student Unai Fischer Abaigar discusses his paper The Value of Prediction in Identifying the Worst-Off. He explains how governments use AI to allocate limited resources, and when it is more effective to improve predictive models versus simply expanding access to public programs. Using real data from Germany's employment offices, Unai's research challenges the assumption that better prediction always means better outcomes in public decision-making.

On October 23, 2025, relAI, the Munich Data Science Institute (MDSI), the Munich Center for Machine Learning (MCML), and the AI Hub@LMU organized the Munich Career Fair AI & Data Science 2025 at TranslaTUM.

The event brought together eleven industry partners and over 150 students at all educational stages, from bachelor's to doctoral level. Each partner presented an overview of its activities in AI and data science, highlighting the associated career opportunities. Students also had ample time and space for networking and personal interactions at the industry stands in the foyer of TranslaTUM.

We extend our heartfelt thanks to all participants, especially the industry partners, for showcasing potential career paths in AI and Data Science and contributing to the success of the fair. We look forward to seeing all of you at the next edition.

🎉 Congratulations!

We are excited to announce that a team consisting of relAI PhD students Shuo Chen, Bailan He, and Jingpei Wu, along with relAI Fellow Volker Tresp and members of the Torr Vision Group from the University of Oxford and TU Berlin, received the Honorable Mention Award at the OpenAI Red-Teaming Challenge on Kaggle. They ranked among the top 20 teams (top 3%) out of over 600 teams and 5,911 participants.

The Red Teaming Challenge, initiated by OpenAI, tasked participants with probing its newly released open-weight model, gpt-oss-20b. The objective was to identify previously undetected vulnerabilities and harmful behaviors, such as lying, deceptive alignment, and reward-hacking exploits.

Would you like to learn more about the awarded work?

The write-up of the hackathon and the accompanying paper, “Bag of Tricks for Subverting Reasoning-Based Safety Guardrails,” detail the findings of the study, revealing systemic vulnerabilities in recent reasoning-based safety guardrails like Deliberative Alignment.

👉 Check them out: https://chenxshuo.github.io/bag-of-tricks/

🎉 Congratulations!

We are proud to share that relAI PhD student Yusuf Sale has been honored with one of the IJAR Young Researcher Awards. The prestigious prize, funded by the International Journal of Approximate Reasoning (IJAR), recognizes students who demonstrate excellence in research at an early stage of their scientific careers.  

Yusuf received the award at ISIPTA 25, the 14th International Symposium on Imprecise Probabilities: Theories and Applications, the leading international forum for theories and applications of imprecise probabilities.

🎉 Congratulations!

We are thrilled to announce that the paper The Value of Prediction in Identifying the Worst-Off, co-authored by relAI PhD student Unai Fischer Abaigar, relAI Fellow Christoph Kern, and Juan Carlos Perdomo from Harvard University, has been selected for an Outstanding Paper Award at ICML 2025, one of the top-tier conferences in machine learning and artificial intelligence.

This is an exceptional outcome: only six papers received this recognition out of more than 12,000 submissions this year.

relAI has been instrumental in fostering the collaboration that led to this significant outcome by funding Unai Fischer Abaigar's research stay at Harvard University. Visits to international research centers are a core component of the relAI PhD curriculum, designed to support collaborations with international researchers and to provide international research experience on the reliability of artificial intelligence (AI).

The paper tackles aspects of the Algorithmic Decision-Making relAI research area and the relAI central theme Responsibility, exploring how predictive models, particularly those using machine learning, can be used in government programs to identify and support the most vulnerable individuals.

On the latest TV episode of “Neuland” by BR - Bayerischer Rundfunk, relAI PhD student Sarah Ball shares her insights about fundamental issues surrounding a central theme of relAI: “responsibility in AI systems.” She addresses topics such as when AI might reinforce discrimination and how to ensure that AI systems align with human values.

Here is a short clip from the conversation and the link to the full video: https://www.ardmediathek.de/video/Y3JpZDovL2JyLmRlL2Jyb2FkY2FzdC9GMjAyNVdPMDA5MzQ2QTA

Last week, our industry partner QuantCo generously hosted over 30 relAI students at its offices in Munich. The visit provided a valuable platform for our students and QuantCo colleagues to connect, fostering a friendly environment for discussions on potential future collaborations.

We extend our sincere gratitude to QuantCo for their warm welcome and for facilitating such a beneficial exchange!

We are excited to announce that the call for applications to the MSc program 2025 of our Konrad Zuse School of Excellence in Reliable AI (relAI) is now open!

The relAI MSc program complements the MSc programs at TUM and LMU, offering cross-sectional training in AI. It provides coherent yet flexible and personalized education, combining scientific coursework, professional development courses, and industry exposure.

Funded applicants will receive a scholarship of up to 992 EUR (depending on independent income). They are further supported by travel grants, e.g., for home travel.  

We highly encourage you to apply if you have: 

  • an excellent bachelor’s degree in computer science, mathematics, engineering, natural sciences or other data science/machine learning/AI related disciplines;
  • a genuine interest in working on a reliable AI topic covering aspects such as safety, security, privacy, and responsibility in one of relAI's research areas: Mathematical & Algorithmic Foundations, Algorithmic Decision-Making, Medicine & Healthcare, or Robotics & Interacting Systems;
  • certified proficiency in English.

📆 Application Deadline: 17 June 2025 (23:59 AoE)

🔗 Apply now: https://zuseschoolrelai.de/application/#MSc-Program-Application

Please help us spread the word, especially to excellent international candidates.

The excellent work of relAI students will be prominently represented at the Thirteenth International Conference on Learning Representations (ICLR) 2025, which will take place at the Singapore EXPO from 24 to 28 April 2025.

Thirteen publications from our students will be presented at the conference, nine of them in the main track. Notably, four out of these nine publications have been selected for Oral or Spotlight presentations. This is a significant achievement and demonstrates the high quality of relAI research, considering that only 15% of accepted papers are invited to give a talk.

If you plan to attend the conference, do not miss the opportunity to discuss these publications directly with some of our students. Be sure to attend the Oral presentation by Yan Scholten titled "A Probabilistic Perspective on Unlearning and Alignment for Large Language Models" on 24 April. You can learn about Lisa Wimmer's work, "Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning," at the Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions. Additionally, check out the posters of Amine Ketata and Chengzhi Hu! Amine will be presenting his work on "Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space," and you can talk to Chengzhi Hu about "Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation."

Full list of relAI publications at ICLR 2025:

    Oral Presentation - Main Track

  1. A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
    Yan Scholten, Stephan Günnemann, Leo Schwinn

    Spotlight Presentations - Main Track

  2. Exact Certification of (Graph) Neural Networks Against Label Poisoning
    Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
  3. Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning
    Yan Scholten, Stephan Günnemann
  4. Signature Kernel Conditional Independence Tests in Causal Discovery for Stochastic Processes
    Georg Manten, Cecilia Casolo, Emilio Ferrucci, Søren Wengel Mogensen, Cristopher Salvi, Niki Kilbertus

    Posters - Main Track

  5. Differentially private learners for heterogeneous treatment effects
    Maresa Schröder, Valentyn Melnychuk, Stefan Feuerriegel
  6. Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation
    Xinpeng Wang, Chengzhi Hu, Paul Röttger, Barbara Plank
  7. ParFam -- (Neural Guided) Symbolic Regression via Continuous Global Optimization
    Philipp Scholl, Katharina Bieker, Hillary Hauger, Gitta Kutyniok
  8. Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space
    Mohamed Amine Ketata, Nicholas Gao, Johanna Sommer, Tom Wollschläger, Stephan Günnemann
  9. Constructing Confidence Intervals for Average Treatment Effects from Multiple Datasets
    Yuxin Wang, Maresa Schröder, Dennis Frauen, Jonas Schweisthal, Konstantin Hess, Stefan Feuerriegel

    Workshops

  10. Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning
    Lisa Wimmer, Bernd Bischl, Ludwig Bothmann
    Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions
  11. Privacy Amplification by Structured Subsampling for Deep Differentially Private Time Series Forecasting
    Jan Schuchardt, Mina Dalirrooyfard, Jed Guzelkabaagac, Anderson Schneider, Yuriy Nevmyvaka, Stephan Günnemann
    Workshop on Advances in Financial AI: Opportunities, Innovations and Responsible AI
  12. Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback
    Niklas Ippisch, Anna-Carolina Haensch, Markus Herklotz, Jan Simson, Jacob Beck, Malte Schierholz
    Workshop on Human-AI Coevolution
  13. Exact Certification of (Graph) Neural Networks Against Label Poisoning
    Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
    VerifAI: AI Verification in the Wild
  14. Graph Neural Networks for Enhancing Ensemble Forecasts of Extreme Rainfall
    Christopher Bülte, Sohir Maskey, Philipp Scholl, Jonas Berg, Gitta Kutyniok
    Workshop on Tackling Climate Change with Machine Learning

🎉 The 8th edition of DataFest Germany will be held at Ludwig-Maximilians-Universität in Munich from 28 to 30 March 2025. relAI is proud to support the event organization again this year. Additionally, a team of relAI students will participate in this exciting competition and networking opportunity.

The event is an annual data-driven competition, commonly referred to as a “hackathon,” that alternates between Mannheim and Munich. It is organized in collaboration with partners from industry and research institutions.

DataFest Germany follows the model of DataFest™, organized by the American Statistical Association. The world's first DataFest took place at the University of California in 2011, and many universities have since adopted the format.

Learn more about DataFest Germany at this link.