Artificial intelligence
From Axios, 2 days ago
OpenAI releases "Spud" GPT-5.5 model
GPT-5.5 enhances autonomous task handling and efficiency in various fields, marking a significant advancement in AI capabilities.
Next-word pretraining creates statistical pressure toward hallucination, even with idealized, error-free training data: facts that lack repeated support in the training corpus yield unavoidable errors, while recurring regularities do not.
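The intuition behind that claim can be illustrated with a toy simulation (a hypothetical setup, not any lab's actual analysis): treat "facts" as question-answer pairs with arbitrary answers, so a model can only answer correctly for questions it has seen. The fraction of the corpus made up of one-off facts (the Good-Turing singleton estimate) then tracks the rate of unanswerable, hallucination-prone queries.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical setup: facts are question -> answer pairs whose answers are
# arbitrary, so they cannot be inferred from patterns, only memorized.
NUM_QUESTIONS = 50_000
CORPUS_SIZE = 50_000

# Training corpus: uniform draws over the question space.
corpus = [random.randrange(NUM_QUESTIONS) for _ in range(CORPUS_SIZE)]
seen = Counter(corpus)

# Good-Turing singleton fraction: share of the corpus that consists of
# facts observed exactly once.
singleton_fraction = sum(1 for c in seen.values() if c == 1) / CORPUS_SIZE

# Empirical "hallucination" rate: fresh queries from the same distribution
# whose question was never seen in training, forcing the model to guess.
trials = 50_000
unseen = sum(1 for _ in range(trials)
             if random.randrange(NUM_QUESTIONS) not in seen)
hallucination_rate = unseen / trials

print(f"singleton fraction (Good-Turing): {singleton_fraction:.3f}")
print(f"rate of unseen (guess-forcing) queries: {hallucination_rate:.3f}")
```

Under these assumptions the two quantities come out close (both near 1/e for this corpus size), which is the statistical pressure the summary describes: rarely supported facts are exactly the ones a next-word predictor cannot reliably recall.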
Denise Dresser, Chief Revenue Officer at OpenAI, emphasized the practical applicability of the partnership: "Infosys's deep expertise in large-scale software transformation enables enterprises to deploy Codex across areas like legacy code modernization, code review automation, vulnerability detection, and application development."
For every project that needs guardrails, there's another where they just get in the way. Some projects demand an LLM that returns the complete, unvarnished truth. For these situations, developers are creating unfettered LLMs that can interact without reservation. Some of these solutions are built on entirely new models, while others remove or reduce the guardrails built into popular open-source LLMs.
OpenAI's GPT-5.2 Pro does better at solving sophisticated math problems than older versions of the company's top large language model, according to a new study by Epoch AI, a non-profit research institute.