2026 Call for PhD Applications

We are excited to announce that the call for applications to the PhD program 2026 of our Konrad Zuse School of Excellence in Reliable AI (relAI) is now open!

The innovative relAI PhD program offers cross-sectional training for a successful education in AI, combining scientific instruction, professional development courses, and industrial exposure in a coherent yet flexible and personalised curriculum.

Funded applicants will receive a full salary for three years, including social benefits (TV-L E13 of the German public sector). They may receive additional support through travel grants for conference attendance, research stays, or home travel. Doctoral students are hosted by a relAI Fellow, who helps them define their research project; depending on this fellow's affiliation, they enrol at TUM or LMU.

We highly encourage you to apply if you have: 

  • an excellent master’s degree (or equivalent) in computer science, mathematics, engineering, natural sciences or other data science/machine learning/AI related disciplines;
  • a genuine interest in working on a topic of reliable AI, covering aspects such as safety, security, privacy, and responsibility, in one of relAI's research areas: Mathematical & Algorithmic Foundations, Algorithmic Decision-Making, Medicine & Healthcare, Robotics & Interacting Systems, or Learning and Education;
  • certified proficiency in English.

📆 Application Deadline: January 13th, 2026

🔗 Apply now: www.zuseschoolrelai.de/application

Please help us spread the word, especially to excellent international candidates.

We are happy to announce that Resaro has joined relAI as an Industry Partner! 

Resaro stands for REsponsible - SAfe - RObust. Its mission is to ensure the performance, safety, and security of mission-critical AI systems, which is fundamentally aligned with the four relAI central themes: responsibility, privacy, safety and security. 

Resaro’s Approved Intelligence Platform (AIP) provides modular, scenario-based testing workflows to evaluate mission-critical AI systems in defence, public safety, and critical civil use cases. It delivers a comprehensive, end-to-end testing environment based on a proprietary AI trust ontology with measurable AI Solutions Quality Indicators (ASQI) to test, evaluate, verify, and validate at the solution or system level across different AI modalities. This evaluation covers various aspects, including quality, performance, safety, and security. Recent systems under examination have included anti-money laundering solutions, X-ray imaging anomaly classifiers, deepfake detectors, UAVs, face-in-crowd recognition systems, hypothesis generators for pharmaceutical research, customer service chatbots, and video action recognition solutions, among others.

Additionally, Resaro has developed an innovative approach not only to test for quality but also to describe it in a use-case-specific yet standardized manner. For more information, visit www.resaro.ai/asqi.

Partnership with relAI:

🤝 Through this mutually beneficial partnership, relAI students will gain access to internship and research opportunities, while relAI will expand its network by adding unique skills. Additionally, Resaro will strengthen and broaden its open-source trust community, exchanging knowledge with both academic institutions and industry partners.


When we try to help the most vulnerable, and we have limited resources to deploy, should we invest them in building better prediction models, or is it sometimes more effective to simply expand access and help more people, even if the targeting isn’t absolutely perfect?

If you’d like to learn more about it, check out the 👉 podcast!

In this episode of Executive Code, PhD student Unai Fischer Abaigar discusses his paper The Value of Prediction in Identifying the Worst-Off. He explains how governments utilize AI to allocate limited resources—and when it is more effective to enhance predictive models versus simply expanding access to public programs. Using real data from Germany’s employment offices, Unai’s research challenges the assumption that better prediction always means better outcomes in public decision-making.

🎉Congratulations!

We are thrilled to announce that Frauke Kreuter, a relAI Fellow and member of the relAI Steering Committee, has been selected as the recipient of the 2026 Waksberg Award. This prestigious award recognizes her significant impact on survey methodology and her role in training the next generation of researchers.

The Waksberg Award is presented by the American Statistical Association and by Statistics Canada's Survey Methodology journal to honor outstanding contributions to survey statistics and methodology.

As part of this recognition, Frauke Kreuter will deliver the Waksberg Invited Address at the Statistics Canada Symposium in 2026 and will also publish a paper in the December 2026 issue of Survey Methodology.

More Information:

https://www.lmu.de/ai-hub/en/news-events/all-news/news/prof.-dr.-frauke-kreuter-wins-2026-waksberg-award.html

On Tuesday, October 21, 2025, the Women in Data Science (WiDS) Munich Conference gathered a vibrant community of researchers, professionals, and students for a day filled with engaging talks, insightful panels, and meaningful networking opportunities. Hosted by Bayerischer Rundfunk (BR), the event celebrated the power of data science to drive social change and empower women in technology.

In line with relAI's commitment to promoting gender equality and diversity, its members supported the event in various ways. The conference was moderated by relAI PhD student Lisa Schmierer, and the keynote address was delivered by relAI Fellow Enkelejda Kasneci, who spoke about using AI to support socially disadvantaged children. Additionally, relAI PhD student Lisa Wimmer presented a talk on "The Need for Uncertainty in Machine Learning." Frauke Kreuter, relAI Fellow and Ombudsperson as well as member of the Steering Committee, participated as a panelist in the discussion titled “When Evidence Meets Opposition: Navigating the New Normal.” Finally, relAI Coordinator Andrea Schafferhans shared insights on how to earn a PhD in an academic environment during the networking session.

👉 Visit these links for more information on the event, including the agenda and photos.

MDSI news

Women in Data Science | Munich

On October 23, 2025, relAI, the Munich Data Science Institute (MDSI), the Munich Center for Machine Learning (MCML), and the AI Hub@LMU organized the Munich Career Fair AI & Data Science 2025 at TranslaTUM.

The event brought together eleven industry partners and over 150 students across various educational stages, from bachelor’s to doctoral levels. Each partner presented an overview of its activities in AI and data science, highlighting associated career opportunities. Additionally, students had ample time and space for networking and personal interactions at the industry stands in the foyer of TranslaTUM.

We extend our heartfelt thanks to all participants, especially the industry partners, for showcasing potential career paths in AI and Data Science and contributing to the success of the fair. We look forward to seeing all of you at the next edition.

🎉 Congratulations!

We are excited to announce that a team consisting of relAI PhD students Shuo Chen, Bailan He, and Jingpei Wu, along with relAI Fellow Volker Tresp and members of the Torr Vision Group from the University of Oxford and TU Berlin, received the Honorable Mention Award at the OpenAI Red-Teaming Challenge on Kaggle. They ranked among the top 20 teams (top 3%) out of 5,911 participants and over 600 teams.

The Red Teaming Challenge, initiated by OpenAI, tasked participants with probing its newly released open-weight model, gpt-oss-20b. The objective was to identify previously undetected vulnerabilities and harmful behaviors, such as lying, deceptive alignment, and reward-hacking exploits.

Would you like to learn more about the awarded work?

The write-up of the hackathon and the accompanying paper, “Bag of Tricks for Subverting Reasoning-Based Safety Guardrails,” detail the findings of the study, revealing systemic vulnerabilities in recent reasoning-based safety guardrails like Deliberative Alignment.

👉 Check them out: https://chenxshuo.github.io/bag-of-tricks/

relAI is excited to announce that GE HealthCare has become an official industry partner.

As one of the leading global providers of MRI, ultrasound, and other medical imaging technologies, GE HealthCare is dedicated to creating a world where medicine and healthcare have no limits. Its mission aligns closely with relAI’s focus on safety and security, aiming to provide high-quality, reliable devices that improve patient care. GE HealthCare is furthermore connected to relAI through PhD students Natascha Niessen and Ha Young Kim, both of whom conduct their doctoral research as scientists at GE HealthCare.

GE HealthCare is eager to connect with young talents at relAI and support them in their career development and educational journeys. To facilitate this connection, GE HealthCare will host an event, offering relAI students the opportunity to learn more about the work being done at the R&D site in Munich. This event will also allow students to explore potential collaborations and meet professionals who are shaping the future of medical technology.


Education is a crucial societal priority and a strategic focus for the application of reliable AI. To address this, relAI has introduced a new research area: Learning & Instruction. This initiative will be led by relAI Fellows Prof. Jochen Kuhn from LMU and Prof. Enkelejda Kasneci from TUM, both of whom are experts in educational technology.

Learning & Instruction focuses on exploring how reliable AI can be used to transform education in meaningful and responsible ways. It investigates the potential of intelligent tutoring systems, adaptive feedback, and digital learning assistants to personalize learning paths and provide targeted support. At the same time, it examines the broader effects of AI on teaching and learning: how AI systems shape learner motivation, teacher roles, and the dynamics of human-AI collaboration. 

By bringing together expertise from artificial intelligence, learning sciences, and educational research, Learning & Instruction aims to develop robust and trustworthy AI applications that not only advance technology, but also serve pedagogical goals and democratic values.

We are excited to announce that the European Research Council (ERC) has awarded a Starting Grant to relAI Fellow Niki Kilbertus for the project DYNAMICAUS, which focuses on advancing the understanding of cause-and-effect relationships in complex dynamical systems.

Many global challenges, from climate change to healthcare and pandemic preparedness, involve systems where small changes can have far-reaching effects. Understanding how interventions influence outcomes in such complex dynamics requires reliable “if-then” reasoning. Traditional mathematical dynamical models often oversimplify these systems, while purely data-driven machine learning models, though powerful, can be difficult to interpret and may not generalize well to new situations. The DYNAMICAUS project addresses this gap by combining machine learning methods with rigorous mechanistic modeling and methods from causal inference.

DYNAMICAUS aligns closely with the mission of relAI. Its goal is to provide reliable insights that address complex societal challenges in a responsible and impactful way. Additionally, a key application area for these methods is medicine & healthcare, where the aim is to enhance treatment planning by improving the anticipation of patient outcomes.

More Information:

https://www.nat.tum.de/en/nat/latest/article/six-erc-starting-grants-for-researchers-at-tum/

https://www.helmholtz-munich.de/en/newsroom/news-all/artikel/niki-kilbertus-receives-erc-starting-grant-for-causal-analysis-in-complex-systems