relAI MSc Program call open

We are excited to announce that our call for applications to the relAI MSc program is now open! 

The novel relAI MSc program is an addition to a regular MSc program at the Technical University of Munich (TUM) or the Ludwig-Maximilians-Universität München (LMU), offering comprehensive cross-sectional training in reliable AI, including scientific coursework, professional development courses, and industrial exposure. Funded applicants receive a scholarship of up to €934 as well as additional support such as travel grants for home travel.   

relAI, funded by the German Academic Exchange Service (DAAD), is embedded in the unique transdisciplinary Munich AI ecosystem, combining the expertise of the two Munich Universities of Excellence, TUM and LMU.  

We strongly encourage you to apply if you:   

  • hold an excellent Bachelor’s degree in computer science, mathematics, engineering, the natural sciences, or another data science, machine learning, or AI-related discipline,  
  • have been accepted to an MSc program in one of these disciplines at either TUM or LMU starting in spring or fall 2024, or have applied to one (acceptance is required before joining relAI), 
  • have a genuine interest in studying reliable AI, covering aspects such as safety, security, privacy, and responsibility, in one of relAI’s research areas: Mathematical & Algorithmic Foundations, Algorithmic Decision-Making, Medicine & Healthcare, or Robotics & Interacting Systems, and
  • can certify English proficiency at C1 level or higher.  

📆 Application Deadline: June 17th, 2024  

🔗 Apply now: 

First relAI Safety Hackathon

We are thrilled to share the outcomes of our recent student-driven event organized by Maria Matveev and Julius Hege from the Chair of Mathematical Foundations of Artificial Intelligence (LMU): the first relAI Safety Hackathon, held last weekend! This dynamic gathering brought together a mix of students and professionals interested in the field of AI safety. 

Over the course of two intense days, participants delved into practical projects addressing various aspects of AI safety. The projects ranged from adversarial prompting on a binary-question dataset to measure the robustness of responses, to a website that lets you compare your own emotional intelligence and bias with those of large language models such as Llama and ChatGPT. The latter project is publicly available, and you can try it out here:  

The atmosphere at the hackathon was inspiring, with enthusiastic participants exchanging ideas, insights, and experiences on how to enhance the reliability and safety of AI. The event gave attendees a great opportunity not only to work on innovative projects but also to engage in thought-provoking discussions about the ethical implications and potential risks associated with AI. We look forward to more engaging events! 

Social media links of the event: X & LinkedIn