
ML4G Germany - AI Alignment Camp
Apply Here 📝
Contact: germany@ml4good.org
Description
Held at our rented venue, this camp will fast-track your deep learning skills, introduce you to AI safety research and the risks and challenges posed by AI, and connect you with like-minded people for potential friendships and collaborations.

Activities
- Peer-coding sessions following a technical curriculum with mentors
- Presentations by experts in the field
- Review and discussion of AI Safety literature
- Personal career advice and mentorship
- Outdoor and evening activities - and time to rest!
Logistics
- The bootcamp is 100% free. There is no fee for room, board, or tuition.
- We ask participants to cover their own travel costs - however, if this would prevent you from attending, please let us know and we will cover them.
- The camp is 10 days long
- The venue is 2.5 hours from Berlin, see the location here
Prerequisites
- Motivation
- Desire to improve the world
- Sufficient reading comprehension level in English
- Mathematics level equivalent to at least one year of university education in:
  - Linear algebra (matrix operations, eigenvalues, eigenvectors, linear subspaces)
  - Analysis (multivariable calculus)
  - Probability (random variables, expected values, conditional distributions, Bayes' theorem)
- Some coding experience, ideally Python - NumPy, basic OOP, math programming
- Basic deep learning knowledge (feedforward neural networks, the gradient descent algorithm) - be comfortable with the concepts of videos 1-3 of this 3Blue1Brown playlist
For the last three points: if you are unsure whether you meet these requirements, please apply anyway. We are happy to provide material for you to learn from before the camp.
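To give a rough sense of the expected baseline, here is a minimal sketch (not camp material) of a feedforward network trained with plain gradient descent on XOR, using only NumPy. The architecture, hyperparameters, and variable names are arbitrary illustrative choices; if you can follow the forward pass and the hand-written chain rule below, you meet the deep learning prerequisite.

```python
import numpy as np

# Tiny feedforward network: 2 inputs -> 8 tanh hidden units -> 1 linear output.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

losses = []
lr = 0.1
for _ in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)                  # hidden activations, shape (4, 8)
    out = h @ W2 + b2                         # linear output, shape (4, 1)
    losses.append(((out - y) ** 2).mean())    # mean squared error

    # Backward pass: chain rule, written out by hand
    d_out = 2 * (out - y) / len(X)
    dW2 = h.T @ d_out;  db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)       # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h;    db1 = d_h.sum(0)

    # Gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

This is exactly the level of material covered in videos 1-3 of the 3Blue1Brown playlist mentioned above, just expressed in code.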
Tentative Curriculum
- Implement ResNet in PyTorch, writing all the layers from scratch and loading weights from a trained model
- Implement interpretability techniques on the ResNet
- Implement SGD and other local optimization algorithms, run remote hyper-parameter searches on a simple architecture
- Implement a simple clone of parts of PyTorch, with particular focus on the implementation of back-propagation
- (Optional) CUDA programming day: write various CUDA kernels and see how close you can get to the performance of PyTorch's kernels
- Implement GPT-2 from scratch, implement beam search
- Fine-tune BERT on classification, fine-tune GPT-2 on some specific corpus
- Look at various interpretability techniques on GPT-2
- Data-parallel training
- AI safety literature review
- Projects on topics like:
  - Interpretability of language models
  - Adversarial robustness of neural networks
  - Mathematical frameworks for artificial agents' behaviors
  - Conceptual research on AI Alignment
  - AI Governance: the semiconductor supply chain
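As one concrete taste of the curriculum above, the beam search mentioned alongside GPT-2 can be sketched with a toy next-token model. The transition table below is invented purely for illustration; real beam search over GPT-2 scores each candidate continuation with the model itself, but the search procedure is the same.

```python
import math

# Hypothetical toy "language model": next-token log-probabilities depend
# only on the previous token (unlike GPT-2, which conditions on the prefix).
LOGPROBS = {
    "<bos>": {"a": math.log(0.6), "b": math.log(0.3), "<eos>": math.log(0.1)},
    "a":     {"a": math.log(0.1), "b": math.log(0.7), "<eos>": math.log(0.2)},
    "b":     {"a": math.log(0.4), "b": math.log(0.1), "<eos>": math.log(0.5)},
}

def beam_search(beam_width=2, max_len=4):
    # Each hypothesis is (token list, cumulative log-probability).
    beams = [(["<bos>"], 0.0)]
    finished = []
    for _ in range(max_len):
        # Expand every live hypothesis by every possible next token.
        candidates = []
        for tokens, score in beams:
            for tok, lp in LOGPROBS[tokens[-1]].items():
                candidates.append((tokens + [tok], score + lp))
        # Keep the beam_width best; set completed hypotheses aside.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for tokens, score in candidates:
            if tokens[-1] == "<eos>":
                finished.append((tokens, score))
            elif len(beams) < beam_width:
                beams.append((tokens, score))
        if not beams:
            break
    finished.extend(beams)  # include any still-unfinished hypotheses
    return max(finished, key=lambda f: f[1])[0]

print(beam_search())  # -> ['<bos>', 'a', 'b', '<eos>']
```

Note that greedy decoding (beam width 1) would commit to "a" then "b" step by step; beam search keeps several hypotheses alive and can return a sequence whose first token was not the single most likely one.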
FAQ
- How many people will attend the camp? There will be 25 people, including 5 organizers/staff.
- Will there be any spare time? There will be periods of leisure and rest during the camp.
- What language will the camp be in? All courses, instruction, and resources will be in English.
- What do you mean by AI Safety and ML4G? By “AI Safety” we mean ensuring that AI doesn’t lead to the death or disempowerment of humanity. In a recent open letter signed by many deep learning pioneers, it is stated that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Avoiding these bad outcomes is a challenge that has to be tackled on a societal level. In this camp, we will focus on technical approaches to building safer AI systems, for example by making their internal processes more interpretable.
- I am not sure my level of technical knowledge is sufficient. Your level of motivation is more important than your current technical skills. Especially when it comes to coding, we think you can learn most of the relevant skills in the week before the camp. It would be very useful if you have a basic understanding of Linear Algebra and some experience coding in Python (or solid experience in other languages). Before the camp begins we will provide some guidance on what would be most helpful to review or learn.
- How much do I need to know about AI Safety to apply? We mainly select participants with a mix of technical ability and motivation. When it comes to theoretical AI Safety topics, we intend to get you all up to speed, and we don’t expect an advanced level of knowledge. You will however get more value out of the event if you have familiarity with AI Safety beforehand. We will provide some reading before the camp for those less familiar.
Team
Charbel-Raphael Segerie (LinkedIn) is the co-head of the AI Safety Unit at EffiSciences. He will be the primary instructor for the coding parts of the event and has been the lead curriculum developer for all past iterations of ML4Good. He teaches technical AI Safety at ENS Paris-Saclay in the Mathematics, Vision and Learning master's program.
Bogdan-Ionut Cirstea (LinkedIn) is an independent AI safety researcher funded by the Center on Long-Term Risk, currently working mostly at the intersection of neuroscience, deep learning and AI alignment. During the camp, he will be the main instructor in charge of the conceptual parts. He completed a Master's degree in Applied Mathematics at the École normale supérieure and has a PhD in Machine Learning.
Evander Hammer (LinkedIn) has a Bachelor's in Behavioral Disorders and experience in organizing community events. He is motivated to contribute his skills to AI Safety and has started his first field-building projects. He is also interested in compute governance and wants to strengthen his understanding of technical safety approaches.
Yannick Muehlhaeuser (LinkedIn) is currently studying physics at the University of Tuebingen and has multiple years of experience organizing groups and events. He spent last summer working on Space Governance as a CHERI Fellow, where he also co-authored the research agenda of the Space Futures Initiative.
Nia Gardner studied Computer Science and Economics at university and has spent the past few years working as a software engineer. She is one of the organisers of the EA Manchester group.
Application
If you can’t make it from August 24th to September 3rd, you can apply to a similar camp in
- France: 31 July to 9 August 2023 (apply by 15 June)
- Switzerland: 4 September to 16 September 2023 (apply here once applications open)
For any other questions, write to us at: germany@ml4good.org