
PhD
Chair of Artificial Intelligence and Machine Learning at LMU
Institute of Informatics
Akademiestr. 7
80799 München
Biosketch
Asma holds a B.S. in Mathematics from the University of Massachusetts Boston. After finishing her degree, she worked in the healthcare IT industry for about three years, where her work centered on data analytics to improve the overall well-being of a defined group of individuals. To pursue her passion for the field, she completed an M.S. in Machine Learning at Mohammed Bin Zayed University of Artificial Intelligence, where her master's thesis focused on learning from noisy labels and uncertainty. She is currently a PhD student at LMU under the supervision of Eyke Hüllermeier, working to deepen her understanding of the domain.
relAI Research
Trustworthy ML: Explainability meets Uncertainty Quantification
Trustworthy Machine Learning combines explainability and uncertainty quantification (UQ) to create models that are both transparent and reliable. Explainability helps interpret model decisions, while UQ assesses confidence by distinguishing between aleatoric (data-related) and epistemic (model-related) uncertainty. This integration is vital for critical fields like healthcare (diagnosis), autonomous systems (safety), and finance (risk assessment), where incorrect or overconfident predictions can have severe consequences. By understanding what a model predicts, why, and with what level of confidence, stakeholders can make informed decisions and build trust. Recent advances, such as probabilistic explanations, uncertainty-aware feature attributions, and set-based predictors, are driving more interpretable and reliable AI systems.
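The distinction between aleatoric and epistemic uncertainty mentioned above can be made concrete with a minimal sketch. A common approach (one of several, and an illustrative assumption here rather than the specific method used in this research) decomposes the predictive uncertainty of an ensemble of classifiers: the entropy of the averaged prediction gives the total uncertainty, the average entropy of the individual members approximates the aleatoric part, and their difference captures the epistemic part, i.e. disagreement among models. The function name `uncertainty_decomposition` is hypothetical.

```python
import numpy as np

def uncertainty_decomposition(member_probs):
    """Entropy-based uncertainty decomposition for an ensemble.

    member_probs: array of shape (n_members, n_classes), each row the
    predicted class distribution of one ensemble member.
    Returns (total, aleatoric, epistemic) in nats, where
    total = H(mean prediction), aleatoric = mean member entropy,
    and epistemic = total - aleatoric (the model-disagreement part).
    """
    member_probs = np.asarray(member_probs, dtype=float)
    eps = 1e-12  # guard against log(0)
    mean_probs = member_probs.mean(axis=0)
    total = -np.sum(mean_probs * np.log(mean_probs + eps))
    aleatoric = -np.mean(
        np.sum(member_probs * np.log(member_probs + eps), axis=1)
    )
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Members agree and are confident: low total, near-zero epistemic.
print(uncertainty_decomposition([[0.95, 0.05], [0.9, 0.1]]))
# Members disagree strongly: epistemic uncertainty dominates.
print(uncertainty_decomposition([[0.99, 0.01], [0.01, 0.99]]))
```

In the second call the averaged prediction is uninformative (close to uniform) even though each member is individually confident, so nearly all of the total uncertainty is epistemic, i.e. reducible with more data or a better model.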