Falk Schwendicke has been awarded the 2026 IADR Distinguished Scientist Award in Cariology

🎉 relAI is proud to announce that the International Association for Dental, Oral, and Craniofacial Research (IADR) has named relAI Fellow Falk Schwendicke as the recipient of the 2026 IADR Distinguished Scientist William H. Bowen Research in Dental Caries Award 🦷.

IADR is a nonprofit organization dedicated to advancing dental, oral, and craniofacial research for global health and well-being. This esteemed IADR award recognizes exceptional and innovative contributions to our understanding of caries etiology and the prevention of dental caries. It is one of the 17 IADR Distinguished Scientist Awards and is considered one of the highest honors bestowed by the organization.

Falk Schwendicke’s Research Achievements

His early work focused on minimally invasive and evidence-based caries management, particularly regarding selective carious tissue removal and its economic evaluation. This research has laid the groundwork for contemporary treatment guidelines. His recent studies have increasingly emphasized the integration of emerging technologies to overcome challenges in caries detection and management. Notably, he has been a pioneer in employing advanced artificial intelligence (AI) applications for radiographic analysis, diagnostic support, and predictive modeling.

A significant achievement in his career was leading a randomized controlled trial that evaluated AI-assisted caries detection. This study set new standards for clinical research in the field and informed subsequent cost-effectiveness analyses. Schwendicke also participates in numerous editorial and review roles and has presented at the IADR General Session and various scientific meetings. He has authored over 500 peer-reviewed publications and 30 book chapters and is ranked among the top 1% of most-cited dental researchers worldwide, according to the Stanford global ranking.

👉 Information sources

https://www.iadr.org/about/news-reports/press-releases/falk-schwendicke-named-recipient-2026-iadr-distinguished

https://www.linkedin.com/posts/prof-dr-falk-schwendicke-9bb6271a1_iadr2026-activity-7444709023079350272-kiFG/?utm_source=share&utm_medium=member_ios&rcm=ACoAAAMg4egBnT-dMw4VyJR7tdTe0Z-9xhGUZZI

🎉 Congratulations to the relAI PhD student Johanna Topalis and relAI Fellow Prof. Michael Ingrisch!

🏆 The article they co-authored, “ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports”, has been awarded the Most Cited Article in European Radiology (Impact Factor 2024) by the European Society of Radiology! The work was presented at the European Congress of Radiology (ECR) 2026 in Vienna and honoured by the Editor-in-Chief of European Radiology, Prof. Bernd Hamm.

📖 The article presents the first exploratory case study evaluating the quality of simplified radiology reports generated by the large language model (LLM) ChatGPT. Radiologists rated the reports as generally high quality but also identified errors that could lead to harmful patient interpretations. The findings highlight both the potential and the limitations of early large language models in clinical communication: while simplified reports can enhance accessibility, medical expert supervision and domain-specific adaptation are vital to ensure patient safety.

💡 The study, first published as a preprint in December 2022, was among the earliest scientific assessments of ChatGPT's ability to simplify radiology reports for patients. Since then, a rapidly growing body of research has explored the role of large language models in medical text simplification.

👉 Publication: https://link.springer.com/article/10.1007/s00330-023-10213-1

👉 Preprint: https://arxiv.org/abs/2212.14882


On March 10, relAI students had the privilege of hosting Dr. Sebastian Hallensleben, Chief Trust Officer at the relAI Industry Partner Resaro, as an invited speaker at the relAI student seminar. This seminar serves as an important platform that fosters valuable research exchanges and networking opportunities for our students.

Dr. Hallensleben is an expert at the intersection of AI research, regulation, and industry. He plays a significant role in developing AI standards for Europe as the Chair of CEN-CENELEC JTC 21, where European AI standards are being crafted to support EU regulations. Additionally, he co-chairs the AI risk and accountability initiatives at the Organisation for Economic Co-operation and Development (OECD).

About the Talk

In his talk, he shared valuable insights into the landscape of international AI standards and their development. The first half of the session focused on the EU AI Act, detailing how the evolving landscape of harmonised standards will provide the technical basis for legal compliance. Moving from regulation to practice, he was joined by Linus Stach to demonstrate how Resaro engages with this landscape in developing its AI evaluation platform, navigating the complexity of accurately communicating technical metrics to a wide audience of stakeholders. The speakers then demonstrated their evaluation framework using a case study based on public crime statistics from Baden-Württemberg, showing how it can be used to assess model performance along dimensions such as privacy, consistency and correctness, compare different models, and ensure compliant application. The seminar concluded with an in-depth discussion of the practical challenges of defining and achieving AI reliability in real-world scenarios.

More about the Speaker

Sebastian is the initiator and Programme Chair of the Digital Trust Convention and Principal Advisor Digital Trust at KI Park. As Chief Trust Officer at Resaro, he works towards drilling down to ground truths about the capabilities of AI systems. Previously, he headed Digitalisation and Artificial Intelligence at the VDE Association for Electrical, Electronic and Information Technologies. He focuses in particular on operationalising AI ethics, characterising AI quality, and building privacy-preserving trust infrastructures for a more resilient digital space.

👏 Congratulations to relAI PhD Student Jan Simson and relAI Fellow Prof. Christoph Kern!

We are excited to announce that their article “Preventing Harmful Data Practices by using Participatory Input to Navigate the Machine Learning Multiverse” has been awarded the DGOF Best Paper Award 2026! This honour, which includes a prize of 500 euros, was presented at the annual GOR conference in Cologne on February 26, 2026.

The German Society for Online Research (DGOF) annually recognizes outstanding scientific contributions to the advancement of the methods of online research through the DGOF Best Paper Award.

Their award-winning paper emphasizes the importance of making key decisions throughout the machine learning pipeline transparent and accessible to the general public. It introduces a participatory approach to navigating the multiverse of design choices, advocating for the democratization of essential decisions rather than a narrow focus on optimization.
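The core of a multiverse analysis can be sketched as exhaustively enumerating a pipeline's decision space instead of silently committing to one path. The sketch below is a minimal, hypothetical illustration; the decision names and options are ours, not taken from the paper.

```python
from itertools import product

# Hypothetical decision space for one ML pipeline (illustrative names only).
decisions = {
    "imputation": ["mean", "drop_rows"],
    "fairness":   ["none", "reweighting"],
    "threshold":  [0.5, 0.3],
}

def enumerate_multiverse(decisions):
    """Yield every 'universe': one concrete combination of pipeline choices.

    A multiverse analysis trains and evaluates a model per universe, making
    the impact of each design decision visible and open to scrutiny."""
    keys = list(decisions)
    for combo in product(*(decisions[k] for k in keys)):
        yield dict(zip(keys, combo))

universes = list(enumerate_multiverse(decisions))  # 2 * 2 * 2 = 8 universes
```

Participatory input then acts on this enumeration: stakeholders can rule out universes they consider harmful or unacceptable, rather than leaving the choice to whichever configuration happens to optimize a single metric.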

📖 To the article:

  • Simson, J., et al. (2025). Preventing Harmful Data Practices by using Participatory Input to Navigate the Machine Learning Multiverse. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 806, 1-30. https://doi.org/10.1145/3706598.3713482


On January 20, 2026, researchers from relAI, MDSI and our industry partner SAP gathered for a pitchtalk session at the SAP Labs Munich Campus in Garching. In an interactive setup, participants from the three organizations exchanged research insights and explored ideas for future collaborations.

The afternoon opened with short welcome words from representatives of SAP (Dr. Tobias Müller), MDSI (Sylvia Kortüm) and relAI (Dr. Mónica Campillos). The following pitchtalks covered a wide range of topics in AI and data science, from methodological approaches to practical research resources.

Pitchtalk Session

The talks introduced methodologies and applications across various research fields, from uncertainty modeling to personalized medicine and research data infrastructure.

Max Beier, a relAI PhD student, discussed the advantage of using linear-operator methods in dynamical systems. By focusing on linear operators, his approach supports optimal control, scalable optimization algorithms, and reliable forecasting across time scales ranging from milliseconds to days. He showed that this methodology allows efficient learning of system behavior, modeling of trajectory distributions, and generation of coherent sequences, addressing current limitations in decision‑making models for complex dynamical environments.
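The linear-operator idea can be illustrated with a deliberately tiny sketch (our own illustration in the spirit of such methods, e.g. dynamic mode decomposition; it is not Max Beier's actual approach): fit a linear map x_{t+1} ≈ a·x_t to observed snapshot pairs by least squares, then roll it forward to forecast.

```python
def fit_linear_operator(series):
    """Least-squares fit of a scalar linear operator: x_{t+1} ~ a * x_t.

    This is the 1-D analogue of fitting a linear map to snapshot pairs;
    real systems use matrices or operators on lifted state spaces."""
    pairs = list(zip(series[:-1], series[1:]))
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

def forecast(x0, a, steps):
    """Roll the fitted operator forward to predict future states."""
    states = [x0]
    for _ in range(steps):
        states.append(a * states[-1])
    return states

# A decaying system x_{t+1} = 0.5 * x_t is recovered exactly from its snapshots:
a = fit_linear_operator([8.0, 4.0, 2.0, 1.0])  # -> 0.5
```

Once the dynamics are expressed through a linear operator, forecasting and optimal control reduce to linear algebra, which is what makes such approaches scalable.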

Parastoo Pashmchi, an industrial PhD student at SAP, introduced an efficient algorithm to handle missing data, a common challenge in AI projects. She presented an ML‑based imputation method that preserves the original data distribution by sampling from the conditional distribution of nearest neighbors, overcoming the limitations of common techniques like KNNImputer. By enabling uncertainty quantification and multiple imputations, her approach improves the reliability of predictive models such as SAP’s green energy forecasting use case, where missing solar production data can significantly undermine model accuracy.
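To make the contrast with mean-based kNN imputation concrete, here is a minimal sketch (our own illustration, not the algorithm presented in the talk; all names are hypothetical): instead of averaging the k nearest complete rows, the missing entry is drawn from their values, which preserves the local distribution and yields multiple imputations for uncertainty quantification.

```python
import math
import random

def knn_sample_impute(row, complete_rows, missing_idx, k=3, n_draws=5, seed=0):
    """Impute row[missing_idx] by sampling from the k nearest complete rows.

    A mean-based imputer (e.g. scikit-learn's KNNImputer) would return one
    averaged value; sampling returns several draws from the neighbours'
    empirical distribution, so the spread across draws reflects uncertainty."""
    rng = random.Random(seed)

    def distance(candidate):
        # Euclidean distance over the observed features only.
        return math.sqrt(sum((a - b) ** 2
                             for i, (a, b) in enumerate(zip(row, candidate))
                             if i != missing_idx))

    neighbours = sorted(complete_rows, key=distance)[:k]
    return [rng.choice(neighbours)[missing_idx] for _ in range(n_draws)]

# Example: impute the last feature of (1.0, 1.0, ?) from three complete rows.
draws = knn_sample_impute(
    row=(1.0, 1.0, None),
    complete_rows=[(1.0, 1.1, 10.0), (0.9, 1.0, 12.0), (5.0, 5.0, 99.0)],
    missing_idx=2, k=2,
)
```

Repeating the draws across several imputed datasets (multiple imputation) lets downstream models propagate the imputation uncertainty instead of hiding it behind a single point estimate.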

Mario Picciani, an MDSI PhD student, highlighted recent developments and applications of ProteomicsDB, a proteomics resource established in 2012 through a collaboration between MDSI Core Member Prof. Bernhard Küster and SAP. ProteomicsDB is a powerful multi‑omics, multi‑organism platform that enables real‑time exploration of proteomic, transcriptomic, and drug‑interaction data across the tree of life, supporting research from basic biology to large‑scale initiatives. With expanding capabilities for analyzing drug mechanisms, predicting cell responses, and supporting precision oncology, the SAP HANA–powered resource could become a central tool for biomarker discovery, systems biology, and personalized medicine.

Sebastian Gallenmüller, an MDSI PhD student, presented SLICES-DE, a digital research infrastructure for computing and communication, embedded within a European collaborative network. As a national digital research infrastructure for ICT, SLICES‑DE offers remote‑accessible testbeds, reproducible workflows, and long‑term data management to support research in areas such as 6G, AI, cybersecurity, and cloud‑edge systems. Built as a community‑driven, flexible, and scalable platform, it enables shared experiments, training, and industry collaboration, providing both academia and companies with a versatile environment that can even be booked for individual lectures or large‑scale projects.

Networking over pretzels and lemonade

An informal networking session after the pitchtalks gave speakers and participants from relAI, MDSI, and SAP the opportunity to connect, exchange impressions, discuss research interests, and explore potential collaborations.

We thank our industry partner SAP for hosting this event and sharing insights into ongoing research projects – we look forward to future editions!

👉 Photo Gallery

We are excited to announce that the call for applications to the PhD program 2026 of our Konrad Zuse School of Excellence in Reliable AI (relAI) is now open!

The innovative relAI PhD program offers cross-sectional training in AI that combines scientific education, professional development courses, and industrial exposure, providing a coherent yet flexible and personalised curriculum.

Funded applicants will receive a full salary for three years, including social benefits (TV-L E13 of the German public sector). They may receive additional support through travel grants for conference attendance, research stays, or home travel. Doctoral students are hosted by a relAI Fellow who helps them to define their research project. Depending on the affiliation of this hosting fellow, they enrol at TUM or LMU.

We highly encourage you to apply if you have: 

  • an excellent master’s degree (or equivalent) in computer science, mathematics, engineering, natural sciences or other data science/machine learning/AI related disciplines;
  • a genuine interest in working on a topic of reliable AI, covering aspects such as safety, security, privacy and responsibility, in one of relAI’s research areas: Mathematical & Algorithmic Foundations, Algorithmic Decision-Making, Medicine & Healthcare, Robotics & Interacting Systems, or Learning and Education;
  • certified proficiency in English.

📆 Application Deadline: January 13th, 2026

🔗 Apply now: www.zuseschoolrelai.de/application

Please help us in spreading the word, especially to excellent international candidates.

When we try to help the most vulnerable with limited resources, should we invest them in building better prediction models, or is it sometimes more effective simply to expand access and help more people, even if the targeting isn’t perfect?

If you’d like to learn more about it, check out the 👉 podcast!

In this episode of Executive Code, PhD student Unai Fischer Abaigar discusses his paper The Value of Prediction in Identifying the Worst-Off. He explains how governments utilize AI to allocate limited resources—and when it is more effective to enhance predictive models versus simply expanding access to public programs. Using real data from Germany’s employment offices, Unai’s research challenges the assumption that better prediction always means better outcomes in public decision-making.
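The tradeoff can be made concrete with a toy allocation model (a hypothetical illustration of ours, not Unai's analysis or data): welfare is the total true need covered, and we compare serving few people with a perfect predictor against serving more people with a noisy one.

```python
import random

def covered_need(need, scores, budget):
    """Serve the `budget` people with the highest predicted scores;
    return the total true need actually covered."""
    ranked = sorted(range(len(need)), key=lambda i: scores[i], reverse=True)
    return sum(need[i] for i in ranked[:budget])

rng = random.Random(42)
need = [rng.random() for _ in range(1000)]       # true (unobserved) need
noisy = [n + rng.gauss(0, 0.5) for n in need]    # imperfect predictions

perfect_small = covered_need(need, need, budget=100)   # better model, narrow reach
noisy_large = covered_need(need, noisy, budget=200)    # worse model, wider reach
```

In settings like this, widening access with an imperfect predictor can cover more total need than perfecting the predictions for a small program, which is exactly the kind of question the paper formalizes.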

On October 23, 2025, relAI, the Munich Data Science Institute (MDSI), the Munich Center for Machine Learning (MCML), and the AI Hub@LMU organized the Munich Career Fair AI & Data Science 2025 at TranslaTUM.

The event brought together eleven industry partners and over 150 students across various educational stages, from bachelor’s to doctoral levels. Each partner presented an overview of their activities in AI and data science, highlighting associated career opportunities. Additionally, students had ample time and space for networking and personal interactions at the industry stands in the foyer of TranslaTUM.

We extend our heartfelt thanks to all participants, especially the industry partners, for showcasing potential career paths in AI and Data Science and contributing to the success of the fair. We look forward to seeing all of you at the next edition.

🎉 Congratulations!

We are excited to announce that a team consisting of relAI PhD students Shuo Chen, Bailan He, and Jingpei Wu, along with relAI Fellow Volker Tresp and members of the Torr Vision Group from the University of Oxford and TU Berlin, received the Honorable Mention Award at the OpenAI Red-Teaming Challenge on Kaggle. They ranked among the top 20 teams (top 3%) out of 5,911 participants and over 600 teams.

The Red-Teaming Challenge, initiated by OpenAI, tasked participants with probing its newly released open-weight model, gpt-oss-20b. The objective was to identify previously undetected vulnerabilities and harmful behaviors, such as lying, deceptive alignment, and reward-hacking exploits.

Would you like to learn more about the awarded work?

The write-up of the hackathon and the accompanying paper, “Bag of Tricks for Subverting Reasoning-Based Safety Guardrails,” detail the findings of the study, revealing systemic vulnerabilities in recent reasoning-based safety guardrails like Deliberative Alignment.

👉 Check them out: https://chenxshuo.github.io/bag-of-tricks/