Using topological features to prevent topological errors
Outline: image segmentation is a prominent application of deep learning, but conventionally trained segmentation networks tend to make topological errors. Topological loss functions address this problem. But what does topological correctness actually mean? This post explains the basics of persistent homology and how it can be used in machine learning. Full post
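As a taste of what the post covers: the simplest flavour of persistent homology tracks when connected components are born and when they merge as a threshold sweeps through a function. The sketch below (hypothetical helper, not code from the post; real pipelines use libraries such as GUDHI or Ripser) computes 0-dimensional persistence pairs for the sublevel-set filtration of a 1D signal via union-find and the elder rule.

```python
def persistence_0d(values):
    """0-dim persistence pairs (birth, death) for the sublevel-set
    filtration of a 1D signal: components are born at local minima
    and die when they merge into an older component (elder rule)."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    order = sorted(range(n), key=lambda i: (values[i], i))
    active, birth, pairs = set(), {}, []
    for i in order:
        nbrs = [j for j in (i - 1, i + 1) if j in active]
        active.add(i)
        if not nbrs:
            birth[i] = values[i]            # local minimum: new component
        elif len(nbrs) == 1:
            parent[i] = find(nbrs[0])       # extend an existing component
        else:
            r0, r1 = find(nbrs[0]), find(nbrs[1])
            if r0 != r1:
                if birth[r0] > birth[r1]:   # elder rule: younger dies
                    r0, r1 = r1, r0
                pairs.append((birth[r1], values[i]))
                parent[r1] = r0
            parent[i] = r0
    # components that never merge persist forever
    for r in {find(i) for i in range(n)}:
        pairs.append((birth[r], float("inf")))
    return sorted(pairs)
```

For the signal `[0, 3, 1, 4]`, the minimum at value 1 creates a component that dies when it merges at value 3, while the global minimum at 0 persists forever. Topological losses penalize precisely such short-lived (or missing) features in a network's output.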
What even is differential privacy?
Outline: a concise introduction to differential privacy, which offers provable privacy guarantees for training machine learning models. Full post
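To make the idea concrete before reading the post: the classic Laplace mechanism achieves epsilon-differential privacy for a numeric query by adding noise scaled to the query's sensitivity. A minimal sketch (illustrative, not the post's own code; a Laplace sample is generated as the difference of two exponentials):

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release `true_value` with Laplace noise of scale sensitivity/epsilon,
    giving epsilon-differential privacy for this single query."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    # Laplace(0, scale) = difference of two independent Exp(1/scale) samples
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_value + noise
```

Smaller epsilon means stronger privacy but noisier answers; training-time variants such as DP-SGD apply the same calibrated-noise idea to clipped gradients.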
Mitigating domain shifts
Outline: adapting a deep neural network to unseen data and tasks is imperative these days, yet access to target data is often unavailable during training. Common adaptation techniques, including domain adaptation and domain generalization, learn meaningful representations during source training. Recent paradigms such as test-time training/adaptation instead optimize the source model on unseen data, fine-tuning it on streaming unlabeled data, which makes them useful in practical scenarios. Moreover, these techniques can be applied to a variety of tasks such as regression, classification, and segmentation. Full post
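One popular test-time adaptation recipe (Tent-style entropy minimization) can be sketched in a few lines: keep the classifier frozen and update only a lightweight affine transform of the features so that predictions on unlabeled test data become more confident. The code below is an illustrative NumPy toy with hand-derived gradients, not the method from any specific paper in the post; names like `tent_adapt` are our own.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def tent_adapt(x, W, steps=200, lr=0.25):
    """Tent-style test-time adaptation: minimize prediction entropy on
    unlabeled inputs x by updating only a per-feature affine transform
    (gamma, beta); the linear classifier W stays frozen."""
    d = x.shape[1]
    gamma, beta = np.ones(d), np.zeros(d)
    for _ in range(steps):
        z = (x * gamma + beta) @ W
        p = softmax(z)
        h = entropy(p)
        # gradient of mean entropy w.r.t. logits: -p * (log p + H) / N
        dz = -p * (np.log(p + 1e-12) + h[:, None]) / len(x)
        dx = dz @ W.T                       # back through the classifier
        gamma -= lr * (dx * x).sum(axis=0)  # back through the affine transform
        beta -= lr * dx.sum(axis=0)
    return gamma, beta
```

Restricting the update to a few affine parameters is what makes this practical for streaming data: it is cheap and hard to catastrophically overfit, mirroring how Tent updates only normalization-layer statistics and affine weights.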
A gentle introduction to uncertainty quantification
Outline: uncertainty quantification (UQ) is considered indispensable for predictive models in safety-critical applications. Modern models, though high-performing, struggle to provide meaningful uncertainty estimates for a number of reasons. Full post
Welcome to the relAI Blog
Outline: welcome to the relAI blog of the Konrad Zuse School of Excellence in Reliable AI (relAI). This blog will serve as a platform to share cutting-edge research and developments from our school, highlighting the significant strides we are making towards AI systems that are safer, more trustworthy, and privacy-preserving. Full post