Publications of relAI students at ICLR 2025

The excellent work of relAI students will be prominently represented at the Thirteenth International Conference on Learning Representations (ICLR) 2025, which will take place at the Singapore EXPO from 24 to 28 April 2025.

Thirteen publications from our students will be presented at the conference, nine of them in the main track. Notably, four out of these nine publications have been selected for Oral or Spotlight presentations. This is a significant achievement and demonstrates the high quality of relAI research, considering that only 15% of accepted papers are invited to give a talk.

If you plan to attend the conference, do not miss the opportunity to discuss these publications directly with some of our students. Be sure to attend the Oral presentation by Yan Scholten, "A Probabilistic Perspective on Unlearning and Alignment for Large Language Models", on 24 April. You can learn about Lisa Wimmer's work, "Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning", at the Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions. Additionally, check out the posters of Amine Ketata and Chengzhi Hu! Amine will be presenting his work on "Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space", and you can talk to Chengzhi Hu about "Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation".

Full list of relAI publications at ICLR 2025:

    Oral Presentation - Main Track

  1. A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
    Yan Scholten, Stephan Günnemann, Leo Schwinn

    Spotlight Presentations - Main Track

  2. Exact Certification of (Graph) Neural Networks Against Label Poisoning
    Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
  3. Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning
    Yan Scholten, Stephan Günnemann
  4. Signature Kernel Conditional Independence Tests in Causal Discovery for Stochastic Processes
    Georg Manten, Cecilia Casolo, Emilio Ferrucci, Søren Wengel Mogensen, Cristopher Salvi, Niki Kilbertus

    Posters - Main Track

  5. Differentially private learners for heterogeneous treatment effects
    Maresa Schröder, Valentyn Melnychuk, Stefan Feuerriegel
  6. Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation
    Xinpeng Wang, Chengzhi Hu, Paul Röttger, Barbara Plank
  7. ParFam -- (Neural Guided) Symbolic Regression via Continuous Global Optimization
    Philipp Scholl, Katharina Bieker, Hillary Hauger, Gitta Kutyniok
  8. Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space
    Mohamed Amine Ketata, Nicholas Gao, Johanna Sommer, Tom Wollschläger, Stephan Günnemann
  9. Constructing Confidence Intervals for Average Treatment Effects from Multiple Datasets
    Yuxin Wang, Maresa Schröder, Dennis Frauen, Jonas Schweisthal, Konstantin Hess, Stefan Feuerriegel

    Workshops

  10. Trust Me, I Know the Way: Predictive Uncertainty in the Presence of Shortcut Learning
    Lisa Wimmer, Bernd Bischl, Ludwig Bothmann
    Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions
  11. Privacy Amplification by Structured Subsampling for Deep Differentially Private Time Series Forecasting
    Jan Schuchardt, Mina Dalirrooyfard, Jed Guzelkabaagac, Anderson Schneider, Yuriy Nevmyvaka, Stephan Günnemann
    Workshop on Advances in Financial AI: Opportunities, Innovations and Responsible AI
  12. Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback
    Niklas Ippisch, Anna-Carolina Haensch, Markus Herklotz, Jan Simson, Jacob Beck, Malte Schierholz
    Workshop on Human-AI Coevolution
  13. Exact Certification of (Graph) Neural Networks Against Label Poisoning
    Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
    VerifAI: AI Verification in the Wild
  14. Graph Neural Networks for Enhancing Ensemble Forecasts of Extreme Rainfall
    Christopher Bülte, Sohir Maskey, Philipp Scholl, Jonas Berg, Gitta Kutyniok
    Workshop on Tackling Climate Change with Machine Learning

Are you interested in frontier AI systems, their astonishing capabilities, and their risks for humanity? Then join us for a thought-provoking deep dive and an exclusive OpenAI live Q&A on AI safety.

  • Date: Wednesday, May 8th, 2024 | 19:00 – 20:30 
  • Location: Room B006, Department of Mathematics (Theresienstr. 39) or online 
  • Language: English 

Agenda: 

  • 19:00 – 19:05: Doors open 
  • 19:05 – 19:30: Introduction to AI Safety 
  • 19:30 – 20:15: Presentation & Live Q&A with OpenAI researcher Jan H. Kirchner, co-author of the weak-to-strong generalization paper 
  • 20:15 – 20:30: Closing talk – What can we do? 
  • 20:30 – onward: Optional socializing and small group discussions with free drinks and snacks. 

Please register on the following webpage and prepare your questions! 

We are honored to announce that the Zuse Schools of Excellence in AI have been featured in the AI action plan of the German Federal Ministry of Education and Research ("BMBF-Aktionsplan Künstliche Intelligenz")! 

This strategic plan aims to significantly advance AI research, development, and application in Germany, and to strengthen Germany's position in the global AI landscape. It does so by enhancing research infrastructure and transfer, facilitating AI integration into sectors such as healthcare, fostering dialogue on the societal and ethical implications of AI, and ensuring long-term AI competencies at all levels. Notably, the BMBF has committed over 1.6 billion euros to this AI action plan.

Being explicitly named in this influential plan highlights the dedication of the three Zuse Schools, including relAI, to fostering ethical, reliable AI innovations and preparing the next generation of AI leaders.

A huge thank you to our supporters, partners, fellows, and students for making this milestone possible!