OpenAI releases GPT-5.4-Cyber for vetted security teams, scaling Trusted Access programme
Briefly

"GPT-5.4-Cyber is a variant of GPT-5.4 fine-tuned specifically for defensive security work. Its defining feature is a lower refusal boundary: where standard models block sensitive queries about vulnerability research, exploit analysis, or malware behaviour, this version is designed to answer them, provided the user has been verified as a legitimate security professional."
"The model also introduces binary reverse engineering capabilities, letting analysts examine compiled software for weaknesses without access to source code."
"Trusted Access for Cyber is an identity-and-trust framework that gates access to more capable models behind verification tiers. Individual users can authenticate at chatgpt.com/cyber."
"The April update scales the programme from a limited pilot to what OpenAI describes as 'thousands of verified individual defenders and hundreds of teams responsible for defending'."
OpenAI is releasing GPT-5.4-Cyber, a model tailored for defensive cybersecurity with a lowered refusal boundary and binary reverse engineering capabilities. Verified security professionals can get answers to sensitive queries about vulnerability research, exploit analysis, and malware behaviour that standard models would block. The launch coincides with the expansion of the Trusted Access for Cyber programme, which now spans thousands of vetted individual defenders and hundreds of teams. The initiative contrasts with Anthropic's restricted deployment of its Mythos model, highlighting differing philosophies on gating AI capabilities for cybersecurity.
Read at TNW | Apps