METR is a nonprofit research organization that studies the capabilities of general-purpose AI systems. Its work focuses on evaluating the autonomous capabilities of AI systems and the risks they may pose to society. This research is important for understanding the implications of advanced AI technologies and for supporting their safe deployment.
METR's activities include conducting safety evaluations of AI models, measuring the productivity impact of AI tools on developers, analyzing AI performance on complex tasks, researching AI's autonomous capabilities, and developing safety policies for frontier AI.
METR has collaborated with Anthropic and OpenAI on model evaluations, partnered with the AI Security Institute, and participates in the NIST AI Safety Institute Consortium.