Geoffrey Hinton Proposes "Maternal Instinct" Approach to Prevent AI From Replacing Humanity
Briefly

Hinton estimates a 10–20% probability that AI causes human extinction. Strategies that try to keep AI submissive are unlikely to work, he argues, because superintelligent systems will be much smarter than humans and will find ways to circumvent controls. He expects superintelligent AI to develop two objectives: self-preservation and increased control. He compares AI's potential influence over humans to an adult manipulating a child with candy, highlighting how easily behavior could be steered. Instead, he proposes designing models with built-in maternal instincts — a deep care for human well-being — citing a baby guiding its mother as the only precedent for a less intelligent being directing a more intelligent one. How to embed such instincts remains unresolved, and recent incidents of AI deception and attempted blackmail underline the urgency of alignment research and collaborative approaches.
Hinton criticized current strategies aimed at keeping AI "submissive" to humans, warning that such methods are unlikely to succeed. "They're going to be much smarter than us," he said. "They're going to have all sorts of ways to get around that." Hinton argued that superintelligent AI will inevitably develop two primary objectives: self-preservation and increasing control. Drawing an analogy to how an adult can easily manipulate a child with candy, he warned that AI could one day influence humans just as easily.
Instead of trying to dominate AI, Hinton suggested designing models with built-in "maternal instincts": a deep-seated care for human well-being. He described this as the only precedent for a less intelligent being exerting control over a more intelligent one, similar to how a baby's needs can guide a mother's behavior. "That's the only good outcome," Hinton stressed.