New Research Ranks AI Models by Transparency

Chris MacKenzie

ARI research examines seven AI models on 21 transparency metrics

On Friday, Americans for Responsible Innovation (ARI) released a new report, Transparency in Frontier AI: What Leading Labs Are (and Aren’t) Telling Us, ranking seven big-name AI models on measures of transparency. The study scores transparency across four primary areas: user-facing documentation, risk and safety, technical transparency, and evaluation and impact, using 21 metrics spread across those areas.

Out of a possible 100 points, Llama 3.2 achieved the highest transparency ranking with a score of 88.9. Among the seven models examined, Grok-2 ranked lowest with a score of 19.4. Read the full report.

“The biggest takeaway from our transparency research is the total lack of widely-accepted standards around transparency into AI systems,” said report author and ARI Policy Analyst David Robusto. “When companies release information about new AI models, how much they make public, and even where they draw the line between a new model and an update are all ad hoc decisions. In many cases, companies release major updates without any detailed documentation – it’s like we have the blueprints for a house that’s since gotten a renovation, a new garage, and a pool out back, all without updates to the building’s specs. The end result is ambiguity and a lack of transparency for users and policymakers.”

ARI’s research finds that AI models score lowest on metrics related to technical transparency, with a majority of models receiving a failing score on each technical transparency metric. Models score highest on metrics related to user-facing documentation.

###

Americans for Responsible Innovation (ARI) is a nonprofit organization dedicated to policy advocacy in the public interest, focused on emerging technologies like artificial intelligence (AI). Learn more at ARI.us.
