For the last three points: if you are unsure whether you meet these requirements, please apply anyway. We are happy to provide material for you to learn before the camp.
- How many people will attend the camp? There will be 25 people, including 5 organizers/staff.
- Will there be any spare time? There will be periods of leisure and rest during the camp.
- What language will the camp be in? All courses, instruction, and resources will be in English.
- What do you mean by AI Safety and ML4G? By “AI Safety” we mean ensuring that AI doesn’t lead to the death or disempowerment of humanity. A recent open letter signed by many deep learning pioneers states that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Avoiding these bad outcomes is a challenge that has to be tackled on a societal level. In this camp, we will focus on technical approaches to building safer AI systems, for example by making their internal processes more interpretable.
- I am not sure my level of technical knowledge is sufficient. Your motivation matters more than your current technical skills. Especially when it comes to coding, we think you can learn most of the relevant skills in the week before the camp. A basic understanding of linear algebra and some experience coding in Python (or solid experience in another language) would be very useful. Before the camp begins, we will provide guidance on what would be most helpful to review or learn.
- How much do I need to know about AI Safety to apply? We mainly select participants for a mix of technical ability and motivation. On theoretical AI Safety topics, we intend to get everyone up to speed, and we don’t expect an advanced level of knowledge. You will, however, get more value out of the event if you are already familiar with AI Safety, so we will provide some reading before the camp for those who are less familiar.
Charbel-Raphael Segerie (LinkedIn) is the co-head of the AI Safety Unit at EffiSciences. He will be the primary instructor for the coding parts of the event and has been the lead curriculum developer for all past iterations of ML4Good. He also teaches technical AI Safety at ENS Paris-Saclay in the Mathematics, Vision and Learning master’s program.
Bogdan-Ionut Cirstea (LinkedIn) is an independent AI safety researcher funded by the Center on Long-Term Risk, currently working mostly at the intersection of neuroscience, deep learning, and AI alignment. During the camp, he will be the main instructor for the conceptual parts. He completed a Master’s degree in Applied Mathematics at the École normale supérieure and has a PhD in Machine Learning.
Evander Hammer (LinkedIn) has a Bachelor’s in Behavioral Disorders and experience in organizing community events. He is motivated to contribute his skills to AI Safety and has started his first projects in field building. He is also interested in compute governance and wants to strengthen his understanding of technical safety approaches.
Yannick Muehlhaeuser (LinkedIn) is currently studying physics at the University of Tuebingen and has multiple years of experience organizing groups and events. He spent last summer working on Space Governance as a CHERI Fellow, where he also co-authored the research agenda of the Space Futures Initiative.
Nia Gardner studied Computer Science and Economics at university and has spent the past few years working as a software engineer. She is one of the organisers of the EA Manchester group.
If you can’t make it from August 24th to September 3rd, you can apply to a similar camp in
- France: 31 July to 9 August 2023 - apply until 15 June
- Switzerland: 4 September to 16 September - apply here once applications open
For any other questions, write to us at: email@example.com