OpenAI pledges to publish AI safety test results more often | TechCrunch
Briefly

OpenAI has launched the Safety Evaluations Hub, a platform for sharing detailed results of its AI model safety evaluations. The initiative aims to improve transparency by publishing metrics on harmful content generation, jailbreaks, and hallucinations, updated periodically. OpenAI says it will keep the hub current with major model updates, a response to criticism that it has rushed safety testing and fallen short on transparency. The hub reflects OpenAI's stated commitment to advancing the science of AI evaluation and supporting community efforts toward greater transparency in the field.
OpenAI aims to share ongoing safety evaluations of its AI models through the newly launched Safety Evaluations Hub, enhancing transparency and community engagement.
The hub will provide regular updates on harmful content generation, jailbreaks, and hallucinations, giving the public a clearer picture of the safety of OpenAI's systems.
Read at TechCrunch