Redwood Research is a nonprofit research organization focused on AI safety and security. It specializes in assessing and mitigating threats from advanced AI systems, with a particular focus on risks from misalignment. Its work aims to ensure that AI technologies are developed and deployed safely for societal benefit.
- Assess risks from misaligned AI systems
- Develop protocols for AI control measures
- Advise organizations on AI safety practices
- Evaluate AI models for strategic deception
- Collaborate on AI safety research initiatives
- Registered 501(c)(3) research nonprofit
- Collaborates with Google DeepMind and Anthropic
- Published influential research on AI control and alignment faking