OpenAI’s o3 tops new AI league table for answering scientific questions

The AI tools Gemini and DeepSeek came behind o3 in a league table of responses to scientific questions. Credit: Andrey Rudakov/Bloomberg via Getty

o3, an artificial intelligence (AI) model developed by the creators of ChatGPT, has been ranked the best AI tool for answering science questions in multiple fields, according to a benchmarking platform launched last week.

SciArena, developed by the Allen Institute for Artificial Intelligence (Ai2) in Seattle, Washington, ranked 23 large language models (LLMs) according to their answers to scientific questions. The quality of the answers was voted on by 102 researchers. o3, created by OpenAI in San Francisco, California, was ranked the best at answering questions on natural sciences, health care, engineering, and humanities and social science, after more than 13,000 votes.

DeepSeek-R1, built by DeepSeek in Hangzhou, China, came second on natural-sciences questions and fourth on engineering. Google’s Gemini-2.5-Pro ranked third in natural sciences and fifth in engineering and health care.

Users’ preference for o3 might stem from the model’s tendency to give a lot of detail on the literature it cites and to produce technically nuanced responses, says Arman Cohan, a research scientist at Ai2. But explaining why models’ performance varies is challenging because most are proprietary. Differences in training data and what the model has been optimized for, among other things, could partially explain it, he says.

SciArena is the latest platform developed to evaluate how AI models perform on certain tasks — and one of the first to rank performance on scientific tasks using crowdsourced feedback. “SciArena is a positive effort that motivates a careful evaluation of LLM-assisted literature tasks,” says Rahul Shome, a robotics and AI researcher at the Australian National University in Canberra.

Randomly selected

To rank the 23 LLMs, SciArena asked researchers to submit scientific questions. They received answers from two randomly selected models, which supported their responses with references drawn from Semantic Scholar, an AI research tool also created by Ai2. Users then voted on whether one model gave the better answer, the two answers were comparable, or both performed badly.
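Arena-style leaderboards typically convert such pairwise votes into Elo-style ratings, in which each vote nudges the winner's score up and the loser's down by an amount that depends on how expected the result was. The article does not detail SciArena's exact scoring formula, so the sketch below is an assumed, minimal Elo update over hypothetical votes, not the platform's actual implementation:

```python
# Minimal Elo-style rating sketch for a pairwise-vote leaderboard.
# Hypothetical illustration only; SciArena's real scoring method and
# parameters (starting rating, K-factor) are assumptions here.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def record_vote(ratings: dict, a: str, b: str,
                outcome: float, k: float = 32.0) -> None:
    """Update ratings in place after one vote.

    outcome: 1.0 if A's answer wins, 0.0 if B's wins,
             0.5 if the two were judged comparable.
    """
    ea = expected_score(ratings[a], ratings[b])
    ratings[a] += k * (outcome - ea)
    ratings[b] += k * ((1.0 - outcome) - (1.0 - ea))

# Toy example with three of the ranked models, all starting at 1000.
ratings = {"o3": 1000.0, "DeepSeek-R1": 1000.0, "Gemini-2.5-Pro": 1000.0}
record_vote(ratings, "o3", "DeepSeek-R1", 1.0)     # a vote for o3
record_vote(ratings, "o3", "Gemini-2.5-Pro", 0.5)  # judged comparable
```

Because each update is zero-sum, the total of all ratings stays constant, and thousands of votes (SciArena collected more than 13,000) gradually separate stronger models from weaker ones.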

The platform is now publicly available and lets users ask research questions for free. All users get answers from two models and can vote on their performance, but only votes from verified users who consent to the terms are included in the leaderboard, which Ai2 says will be updated frequently.

The ability to question LLMs on science topics and have confidence in the answers will help researchers to keep up with the latest literature in their field, says AI researcher Jonathan Kummerfeld at the University of Sydney in Australia. “This will help researchers find work they may have otherwise missed.”
