Lisa Thiergart

Machine Intelligence Research Institute (MIRI)

December 2024

During my bachelor’s studies, alongside my technical focus on AI and machine learning, I developed an interest in philosophical topics such as epistemology, utilitarianism, and artificial consciousness—an interest that ultimately led me to found the Philosophia Munich association. Through the exchanges and discussions in that group, I became interested in the research field of AI and AI safety, which is why I continued my studies at the Georgia Institute of Technology. There, the EA group at Georgia Tech reached out to me; they were grappling with the same questions and goals that I was. It was truly inspiring and motivating to discover a community of like-minded thinkers who approach the problem with similar urgency and who had also decided to pursue AI safety research. The group encouraged me to apply for the MATS program, and the fellowship, along with my mentor, significantly accelerated my path in AI safety.

Originally, I wanted to start my own research startup in brain-computer interfaces after my studies. Through EA, I shifted my focus, because I wanted to choose a career path with more impact in AI safety. Through the community, I was also able to meet many other researchers in the field. This ultimately led me to AI safety research at the Machine Intelligence Research Institute (MIRI) in Berkeley.

The sense of community, the seriousness, the careful arguments, and the deep commitment of many EAs around me continue to keep me focused on the main goals, inspire me to overcome obstacles, and motivate me to continue even though it seems we don’t have much time left to turn things around.

Currently, I’m the research lead of MIRI’s technical governance team, which I founded at the beginning of the year. My team conducts technical research in support of AI governance goals. We advise policymakers and serve as technical experts in the Plenary for the Codes of Practice of the EU AI Act. Since I do not expect that we will achieve AI safety through technical solutions before artificial general intelligence (AGI) emerges, I think technical AI governance is one of the best ways to contribute to the safer development of AGI (and probably, one day, artificial superintelligence, “ASI”).

In the coming year, I will work in the area of AI security, which among other things involves protecting critical IP that could become very dangerous in the wrong hands. I will work on various hardware and software proofs of concept for applying FlexHEGs (flexible hardware-enabled guarantees), which enable a range of security and regulatory approaches.

I am involved in the field of AI safety in various ways: I am a mentor for MATS, I bring interested scientists onto my research team, and I am a co-organizer of a new European sister event to the DEFCON AI Security Forums, which will take place in 2025.

If you are interested in the forum, feel free to reach out to me, whether you are a cybersecurity expert, enthusiast, or completely new to the field.
