The ML Alignment & Theory Scholars (MATS) Program connects scholars with mentors in AI alignment, governance, and security. Its approach combines hands-on research with educational seminars and community networking. MATS empowers researchers to address the urgent challenge of unaligned artificial intelligence.
Conduct research on AI alignment challenges; attend workshops and seminars on AI governance; network with professionals in AI safety; collaborate with mentors on research projects; pursue independent research with funding support.
Has supported 357 scholars and 75 mentors since 2021; has received funding from notable organizations such as Open Philanthropy; alumni have co-founded AI safety organizations.