A new partnership to promote responsible AI

Anthropic, Google, Microsoft, and OpenAI have announced the formation of the Frontier Model Forum, a new industry body dedicated to ensuring the safe and responsible development of frontier AI models.

The primary goal of the Frontier Model Forum is to utilize the technical and operational expertise of its member companies to benefit the entire AI ecosystem.

This includes advancing technical evaluations and benchmarks, as well as creating a public library of solutions to promote industry best practices and standards.

The core objectives of the Forum are as follows:

1. Advancing AI safety research: The Forum aims to promote the responsible development of frontier models by minimizing risks and enabling independent, standardized evaluations of their capabilities and safety.

2. Identifying best practices: The Forum will focus on identifying best practices for the responsible development and deployment of frontier models. This will help the public understand the nature, capabilities, limitations, and impact of the technology.

3. Collaboration and knowledge-sharing: The Forum will collaborate with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks associated with frontier AI models.

4. Solving societal challenges: The Forum will support efforts to develop applications that can address society’s greatest challenges, such as climate change mitigation, early cancer detection, and combating cyber threats.

Membership in the Forum is open to organizations that develop and deploy frontier models, demonstrate a strong commitment to frontier model safety, and are willing to contribute to the Forum’s efforts through joint initiatives and support.

To achieve its objectives, the Forum will focus on three key areas over the coming year:

1. Identifying best practices: The Forum will promote knowledge sharing and best practices among various stakeholders, with a focus on safety standards and practices to mitigate potential risks.

2. Advancing AI safety research: The Forum will support the AI safety ecosystem by identifying crucial research questions and coordinating research efforts in areas such as adversarial robustness, interpretability, oversight, research access, emergent behaviors, and anomaly detection. There will be an initial emphasis on creating a public library of technical evaluations and benchmarks for frontier AI models.

3. Facilitating information sharing: The Forum will establish secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks, following responsible disclosure practices from fields like cybersecurity.

Overall, the Frontier Model Forum seeks to build on existing efforts by governments and industries to ensure the responsible development of AI while harnessing its potential to benefit the world.