Evaluating AI systems for Ethics & Safety
I'm an expert on AI safety evaluation, taking a sociotechnical approach. My goal is to develop and establish methods for early risk assessment and mitigation, to help make AI systems safer and contribute to more accountable and responsible AI innovation.
For my CV, see LinkedIn. I'm also on X/Twitter. For articles, see my academic publications and the talks on this site.
I'm always interested in new research on sociotechnical AI safety evaluation, potential collaborations, or speaking and field-building opportunities. If you'd like to get in touch, please email me at lweidinger[at]deepmind[dot]com.