ML4G Germany - AI Alignment Camp

Apply Here 📝



ML for Good is a bootcamp that aims to provide advanced training in deep learning to those who want to work towards making AI safe and beneficial to humanity.

At our rented venue, this camp will fast-track your deep learning skills, inform you about AI safety research, introduce AI's risks and challenges, and connect you with like-minded individuals for potential friendships and collaborations.

The bootcamp will take place from August 24th to September 3rd in Germany. You need to send your application by June 30, 2023 at 11:59 PM GMT+2. Apply Here 📝


How will the days be spent?
  • Peer-coding sessions with mentors, following a technical curriculum
  • Presentations by experts in the field
  • Review and discussion of AI Safety literature
  • Personal career advice and mentorship
  • Outdoor and evening activities - and time to rest!


How do the logistics work?
  • The bootcamp is 100% free. There is no fee for room, board, or tuition.
  • We ask participants to cover their own travel costs - however, if this would prevent you from attending, please let us know and we will cover it.
  • The camp is 10 days long.


Where is the venue?
  • The venue is 2.5 hours from Berlin, see the location here


What are the prerequisites?
  • Motivation
  • Desire to improve the world
  • Sufficient reading comprehension level in English
  • Mathematics level equivalent to at least one year of university education in:
    • Linear algebra (matrix operations, eigenvalues, eigenvectors, linear subspace)
    • Analysis (multivariable calculus)
    • Probability (random variables, expected values, conditional distributions, Bayes theorem)
  • Some coding experience, ideally in Python - NumPy, basic OOP, and programming with mathematical operations
  • Basic deep learning knowledge (feed forward neural network, gradient descent algorithm) - be comfortable with the concepts of videos 1-3 of this 3Blue1Brown playlist

For the last three points - if you are unsure whether you meet these requirements, please apply. We are happy to provide material for you to learn before the camp.
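As a rough gauge of the expected level, you should be able to follow a sketch like the one below: a one-hidden-layer network trained on XOR with hand-written backpropagation and gradient descent. The architecture and hyperparameters here are illustrative choices, not part of the camp curriculum.

```python
import numpy as np

# Toy example of the prerequisite level: a one-hidden-layer network
# trained on XOR with hand-written gradients and gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5  # learning rate (illustrative choice)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    loss = np.mean((p - y) ** 2)
    # Backward pass: the chain rule written out by hand
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(loss)
```

If you can read this and see why each gradient line follows from the chain rule (the concepts covered in the 3Blue1Brown videos above), you meet the deep learning prerequisite.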

Tentative Curriculum

First part of the camp (7 days)
  • Implement ResNet in PyTorch, building all the layers from scratch and loading weights from a trained model
  • Implement interpretability techniques on the ResNet
  • Implement SGD and other local optimization algorithms, run remote hyper-parameter searches on a simple architecture
  • Implement a simple clone of part of PyTorch, with particular focus on the implementation of back-propagation
  • (Optional) CUDA programming day: write various CUDA kernels and see how close you can get to the performance of PyTorch’s kernels
  • Implement GPT-2 from scratch, implement beam search
  • Fine-tune BERT on classification, fine-tune GPT-2 on some specific corpus
  • Look at various interpretability techniques on GPT-2
  • Data-parallel training
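To give a flavor of the PyTorch-clone exercise, here is a minimal sketch of scalar reverse-mode automatic differentiation. It is a simplified illustration (supporting only `+` and `*`), not the camp's actual exercise material.

```python
# Minimal scalar reverse-mode autodiff, in the spirit of the
# "clone part of PyTorch" exercise. Only + and * are supported.
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward_fn = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward_fn():
            # d(a+b)/da = 1 and d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = backward_fn
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward_fn():
            # d(a*b)/da = b and d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward_fn = backward_fn
        return out

    def backward(self):
        # Topological order so each node's grad is complete
        # before being propagated to its parents.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward_fn()

x = Value(3.0)
y = Value(4.0)
z = x * y + x  # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

The camp exercise goes well beyond this sketch - tensors, more operations, and a PyTorch-like API - but the core idea of recording a computation graph and replaying it backwards is the same.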
Second part of the camp (3 days)
  • AI safety literature review
  • Projects on topics like:
    • Interpretability of language models
    • Adversarial robustness of neural networks
    • Mathematical frameworks for artificial agents’ behaviors
    • Conceptual research on AI Alignment
    • AI Governance: the semiconductor supply chain



Who is the team?
Charbel-Raphael Segerie (LinkedIn) is the co-head of the AI Safety Unit at EffiSciences. He will be the primary instructor for the coding parts of the event and has been the lead curriculum developer for all past iterations of ML4Good. He teaches technical AI Safety at ENS Paris-Saclay in the Mathematics, Vision and Learning master's programme.

Bogdan-Ionut Cirstea (LinkedIn) is an independent AI safety researcher funded by the Center on Long-Term Risk, currently working mostly at the intersection of neuroscience, deep learning, and AI alignment. During the camp, he will be the main instructor in charge of the conceptual parts. He completed a Master’s degree in Applied Mathematics at the École normale supérieure and holds a PhD in Machine Learning.

Evander Hammer (LinkedIn) has a Bachelor's in Behavioral Disorders and experience in organizing community events. He is motivated to contribute his skills to AI Safety and has started his first projects in field building. He is also interested in compute governance and wants to strengthen his understanding of technical safety approaches.

Yannick Muehlhaeuser (LinkedIn) is currently studying physics at the University of Tuebingen and has multiple years of experience organizing groups and events. He spent last summer working on Space Governance as a CHERI Fellow, where he also co-authored the research agenda of the Space Futures Initiative.

Nia Gardner studied Computer Science and Economics at university and has spent the past few years working as a software engineer. She is one of the organisers of the EA Manchester group.


Fill in this form 📝

If you can’t make it from August 24th to September 3rd, you can apply to a similar camp in

That’s it! We’ll reach out to you soon!

For any other questions, write to us at: