Anthropic to Fund Initiative to Develop New Third-Party AI Benchmarks to Assess AI Models

Anthropic on Tuesday announced an initiative to develop new benchmarks for testing the capabilities of advanced artificial intelligence (AI) models. The AI firm will fund the project and has invited applications from interested organisations. The company said that existing benchmarks are not enough to fully test the capabilities and impact of newer large language models (LLMs), and that a new set of evaluations focused on AI safety, advanced capabilities, and societal impact therefore needs to be developed.

Anthropic to fund new benchmarks for AI models

In a newsroom post, Anthropic highlighted the need for a comprehensive third-party evaluation ecosystem to overcome the limited scope of current benchmarks. Through the initiative, the AI firm will fund third-party organisations that want to develop new assessments for AI models, with a focus on quality and high safety standards.

For Anthropic, the high-priority areas include tasks and questions that can measure an LLM's AI Safety Levels (ASLs), its advanced capabilities in generating ideas and responses, and the societal impact of those capabilities.

Under the ASL category, the company highlighted several parameters, including the capability of AI models to assist with or autonomously run cyberattacks, their potential to assist in creating, or to deepen knowledge of, chemical, biological, radiological and nuclear (CBRN) threats, national security risk assessment, and more.

In terms of advanced capabilities, Anthropic said the benchmarks should be able to assess AI's potential to transform scientific research, a model's handling of harmful requests and its ability to refuse them, and its multilingual capabilities. Further, the AI firm said it is necessary to understand an AI model's potential to impact society. For this, the evaluations should target concepts such as “harmful biases, discrimination, over-reliance, dependence, attachment, psychological influence, economic impacts, homogenization, and other broad societal impacts.”

Apart from this, the AI firm also listed some principles for good evaluations. Evaluations should not appear in the training data used by AI models, as a benchmark that does often turns into a memorisation test. Anthropic also encouraged keeping between 1,000 and 10,000 tasks or questions in a test set, and asked organisations to use subject matter experts to create tasks that probe performance in a specific domain (a toy sketch of these checks appears below).
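Anthropic's post states these principles in prose only; the Python sketch below is purely illustrative and does not come from the company. All names here (EvalTask, validate_eval_set, and the eight-word n-gram overlap used as a crude contamination proxy) are assumptions made for the example.

```python
# Hypothetical sketch of the evaluation principles described above.
# None of these names or thresholds come from Anthropic's tooling; they
# only mirror the principles stated in the post.
from dataclasses import dataclass


@dataclass
class EvalTask:
    prompt: str
    reference_answer: str
    domain: str            # e.g. "biosecurity"
    author_is_expert: bool  # task written by a subject matter expert


def ngram_set(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Word n-grams, used as a crude proxy for training-data overlap."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def validate_eval_set(tasks: list[EvalTask], training_corpus: list[str]) -> list[str]:
    """Check an evaluation set against the three stated principles:
    1,000-10,000 tasks, expert-authored, and absent from training data."""
    issues: list[str] = []
    if not 1_000 <= len(tasks) <= 10_000:
        issues.append(f"task count {len(tasks)} is outside the suggested 1,000-10,000 range")
    corpus_ngrams = set().union(*(ngram_set(doc) for doc in training_corpus))
    for i, task in enumerate(tasks):
        if not task.author_is_expert:
            issues.append(f"task {i}: not authored by a subject matter expert")
        if ngram_set(task.prompt) & corpus_ngrams:
            issues.append(f"task {i}: prompt overlaps training data (possible memorisation)")
    return issues
```

In practice, detecting training-data contamination is far harder than word n-gram overlap; the point of the sketch is only to show how the three stated principles could be made machine-checkable.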

