DeepMind is holding back release of AI research to give Google an edge
Briefly

Google's recent clampdown on research publications has raised concerns among its employees, notably over a blocked paper detailing vulnerabilities in OpenAI's ChatGPT. While DeepMind sources say they follow a responsible disclosure policy, staff are unsettled by the new review processes, which some see as jeopardizing their careers as researchers. The prioritization of the Gemini project reflects a shift towards commercial interests. Despite impressive AI products and a fluctuating share price, employees worry that the culture is moving away from academic exploration towards strict corporate productivity, a concern echoed in one worker's comment about the company's priorities.
One employee, however, said the company had also blocked a paper that revealed vulnerabilities in OpenAI's ChatGPT, over concerns its release would look like a hostile tit-for-tat.
In the past few years, Google has produced a range of AI-powered products that have impressed the markets, including improved AI-generated summaries that appear above search results.
"If you can't publish, it's a career killer if you're a researcher," said a former researcher.
"Anything that gets in the way of that he will remove," said one current employee. "He tells people this is a company, not a university campus; if you want to work at a place like that, then leave."
Read at Ars Technica