relAI’s research mission is to train future generations of AI experts in Germany who combine technical brilliance with an awareness of the importance of reliable AI. As a central component of the Munich AI ecosystem, relAI is committed to making artificial intelligence safer, more secure, more responsible, and more protective of individual privacy. It encourages responsible use to exploit AI’s vast potential for the benefit of humanity, to advance insight into and debate on machine learning including its ethical and societal dimensions, and to strengthen “AI made in Germany”.
Research projects are conducted by professors, fellows, visiting scholars, post-docs, PhD candidates and Master’s students in academic departments or research centers. Research dissemination is oriented not only toward the academic world but also toward key innovative industries such as medicine and healthcare, robotics and interacting systems, as well as algorithmic decision-making.
The scientific program of relAI will contribute to the end-to-end development of reliable AI, covering different branches of applied research on the basis of profound mathematical and algorithmic foundations. This theoretical grounding of AI applications is a distinguishing feature of relAI: Our conception of reliability involves the demand for a rigorous formal description of properties as well as provable guarantees, because only such guarantees will create the trust and confidence needed for an unreserved adoption of AI in practice.
The research program combines mathematical and algorithmic foundations of reliable AI with domain knowledge in three core application domains (as visualised in this figure): medicine & healthcare, robotics & interacting systems, and algorithmic decision-making. In these applications, which are of major importance for Germany, reliable AI methods are most urgently needed. Thus, the school’s research addresses a highly impactful and innovative topic with core societal demands in domains of public interest.
Each of the school’s four research areas (green in figure above) covers central themes of reliable AI (blue in figure above):
- Safety, i.e., ensuring that AI systems (e.g., robots) do not cause any harm or danger.
- Security, i.e., making AI systems resilient against threats, external attacks, and information leakage, e.g., preventing the manipulation of decision-making systems by adversaries.
- Privacy, i.e., ensuring protection and confidentiality of (individual) data and information, such as medical AI systems incorporating sensitive patient data.
- Responsibility, i.e., developing AI systems that take societal norms, ethical principles, and the needs of people into consideration, for example by making decisions understandable and protecting individuals against discrimination.
By combining foundational AI research with core applications, we realize a strong interdisciplinarity within relAI.
Mathematical and Algorithmic Foundations
Reliability of AI with all its facets can only be achieved through a profound understanding of its foundations. In fact, the current gap between theory and practice of AI methodologies is one of the key obstacles for deriving comprehensive guarantees as required by critical applications. Supporting our goal of reliable AI, the general research challenges we aim to address are twofold.
Firstly, we aim to establish theoretical guarantees for AI. This includes the expressivity of AI models, the analysis of learning algorithms, the generalization capabilities of trained AI systems, and aspects such as robustness, aiming predominantly at concrete error bounds and certification. A particular challenge is posed by novel and highly complex architectures such as graph neural networks or transformers.
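To illustrate what such a certification-style guarantee can look like in its simplest form, the following sketch (plain Python, purely illustrative and not part of any relAI codebase) propagates interval bounds through a single linear layer: for any input within a perturbation box, the output is provably contained in the computed box.

```python
# Minimal sketch of interval bound propagation (IBP) for one linear
# layer y = Wx + b: given the input box [x - eps, x + eps], compute a
# box that provably contains every possible output of the layer.

def ibp_linear(W, b, x, eps):
    """Return (lower, upper) output bounds for all inputs within +/- eps of x."""
    lower, upper = [], []
    for row, bias in zip(W, b):
        center = sum(w * xi for w, xi in zip(row, x)) + bias
        # Worst-case deviation of the dot product is eps times the
        # L1 norm of the weight row.
        radius = eps * sum(abs(w) for w in row)
        lower.append(center - radius)
        upper.append(center + radius)
    return lower, upper

W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, 1.0]
x = [1.0, 1.0]
lo, hi = ibp_linear(W, b, x, eps=0.1)
# Every input in the 0.1-box around x maps into [lo[i], hi[i]].
```

Certifying real networks requires propagating such bounds through nonlinearities and many layers, which is exactly where the complexity of modern architectures makes guarantees hard.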
Secondly, to support reliability, we research algorithmic foundations of AI on relevant topics, such as IT security, federated learning, distributed systems, and causal modeling, thereby ensuring a tight link to the application domains and their practical realization.
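As one example of these algorithmic foundations, federated learning trains a shared model without centralizing raw data: clients train locally and only share parameters. A minimal sketch of the aggregation step (plain Python with hypothetical toy values; real systems add local training, communication, and often secure aggregation):

```python
# Illustrative sketch of federated averaging (FedAvg): each client keeps
# its data private and only shares model parameters; the server averages
# them, weighted by the size of each client's local dataset.

def federated_average(client_weights, client_sizes):
    """Weighted average of client parameter vectors (lists of floats)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical clients with different amounts of local data.
global_model = federated_average(
    client_weights=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[10, 30],
)
# Weighted mean: [(1*10 + 3*30)/40, (2*10 + 4*30)/40] = [2.5, 3.5]
```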
Medicine and Healthcare
AI has the potential to fundamentally transform the future of medicine and healthcare by enabling earlier and more accurate diagnosis and better treatment, leading to improved outcomes for patients and increased efficiency in healthcare. The emergence of AI for medicine and healthcare also offers a number of transformative opportunities for economic growth. Examples include prevention and early detection, e.g., AI for wearable devices as well as AI-based screening (e.g., in mammography). In relAI, we combine expertise in AI, medicine, and healthcare to successfully tackle the challenges on this path.
A key requirement for the successful deployment of AI in clinical environments is the development of safe, secure, and trustworthy ML techniques. In particular, advances are required in robust and data-efficient learning, privacy preservation, and interpretable deep learning.
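To make the privacy-preservation requirement concrete, one standard building block (used here purely as an illustration, not as relAI's specific method) is the Laplace mechanism from differential privacy: an aggregate statistic over patient records is released with noise calibrated to its sensitivity, so that no single record has a distinguishable influence on the output.

```python
import math
import random

# Illustrative sketch of the Laplace mechanism: release a count over
# sensitive records with epsilon-differential privacy. A count has
# sensitivity 1 (adding or removing one record changes it by at most 1),
# so the noise scale is 1 / epsilon.

def laplace_count(true_count, epsilon, rng):
    """Return the count plus Laplace(0, 1/epsilon) noise."""
    scale = 1.0 / epsilon
    # Sample Laplace noise by inverse-transforming a uniform draw.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

rng = random.Random(0)
noisy = laplace_count(true_count=42, epsilon=1.0, rng=rng)
# Smaller epsilon -> more noise -> stronger privacy, lower utility.
```

The released value is unbiased on average, which is why noisy statistics remain useful for research while protecting individual patients.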
Robotics and Interacting Systems
Engineers and computer scientists are currently developing autonomous systems with AI techniques as a core component. This opens up vast possibilities but also comes with enormous challenges regarding safety, security, and privacy. For example, how can we guarantee the safety of an autonomous agent (e.g., a robot in a human environment) under all circumstances, given that a designer cannot foresee every situation the agent will face in the future? How can we balance the advantages of AI cloud computing against the increased risk of security violations? How can we leverage data to adapt to the needs of a human user while bearing privacy concerns in mind? To answer such questions, relAI will focus on safe, secure, and privacy-preserving AI in the context of autonomous agents and interacting systems.
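One common pattern for the first of these questions is a runtime safety shield: the learned policy may propose any action, but a simple, formally verifiable rule overrides it whenever the action could violate a safety constraint. The sketch below (with a hypothetical constraint and names) caps a robot's speed so that it can always brake to a stop before reaching a nearby human.

```python
# Minimal sketch of a runtime safety shield: a verified rule filters the
# learned policy's proposed action. The braking model and parameter
# values here are hypothetical and purely illustrative.

def shield(proposed_speed, distance_to_human, braking_rate=2.0):
    """Cap speed so the robot can always stop before reaching a human.

    From the stopping-distance relation v^2 <= 2 * a * d, the largest
    safe speed at distance d with braking deceleration a is sqrt(2*a*d).
    """
    max_safe_speed = (2.0 * braking_rate * max(distance_to_human, 0.0)) ** 0.5
    return min(proposed_speed, max_safe_speed)

# The policy proposes 3.0 m/s, but at 1 m from a person only 2.0 m/s
# still guarantees a full stop in time.
safe_speed = shield(proposed_speed=3.0, distance_to_human=1.0)
```

The appeal of this design is that the guarantee rests only on the small, analyzable shield, not on the opaque learned policy behind it.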
Algorithmic Decision-Making
Ever more AI applications involve prescriptive modeling in the sense of learning a model that stipulates appropriate decisions or actions to be taken in real-world scenarios: Which medical therapy should be applied? Should this person be hired for the job? Decisions of that kind are increasingly automated and made by algorithms instead of humans, often relying on AI methods. Our ambition is to develop AI-based methodologies for reliable algorithmic decision-making (ADM).
This comes with the need to address specific technical issues, such as the lack of an objective “ground truth” underlying every prediction, and learning from partial training information that comprises feedback about the decision actually made but lacks information about counterfactual alternatives. Methodological research on ADM will be complemented by more application-oriented research on reliable decisions in business and management.
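This partial-feedback setting is exactly the bandit problem: the learner observes only the outcome of the decision it actually made, never the counterfactual outcomes of the alternatives. A minimal epsilon-greedy sketch (plain Python; the reward probabilities are a hypothetical simulation, not real decision data):

```python
import random

# Minimal sketch of learning from bandit (partial) feedback: after each
# decision we observe only its own reward, never what the unchosen
# actions would have yielded. The reward probabilities are hypothetical.

def epsilon_greedy(reward_probs, rounds=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    n_actions = len(reward_probs)
    counts = [0] * n_actions
    values = [0.0] * n_actions  # running mean reward per action
    for _ in range(rounds):
        if rng.random() < epsilon:   # explore a random action
            a = rng.randrange(n_actions)
        else:                        # exploit the current best estimate
            a = max(range(n_actions), key=lambda i: values[i])
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean
    return values, counts

values, counts = epsilon_greedy([0.2, 0.5, 0.8])
# The learner concentrates on the best action despite never observing
# counterfactual rewards for the actions it did not take.
```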