#ai-safety

#ai-regulation
from Fortune
1 week ago
Artificial intelligence

'I'm deeply uncomfortable': Anthropic CEO warns that a cadre of AI leaders, including himself, should not be in charge of the technology's future | Fortune

Anthropic's CEO advocates stronger AI regulation, transparency, and broader oversight rather than leaving safety decisions to a few tech leaders.
from www.aljazeera.com
1 week ago
Artificial intelligence

Why are experts sounding the alarm on AI risks?

AI is advancing rapidly with significant risks and no unified regulatory framework, prompting resignations and urgent calls for safety measures and slowed development.
from SecurityWeek
11 hours ago

Trump Orders All Federal Agencies to Phase Out Use of Anthropic Technology

At issue in the defense contract was a clash over AI's role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
US politics
Artificial intelligence
from TechCrunch
13 hours ago

Musk bashes OpenAI in deposition, saying 'nobody committed suicide because of Grok' | TechCrunch

Elon Musk claims xAI prioritizes AI safety better than OpenAI, citing ChatGPT-related suicides while asserting none occurred from Grok in his deposition testimony.
from Futurism
13 hours ago

Anthropic's Blowout With the Military Involved the Use of Claude Against an Incoming Nuclear Strike

The Claude AI builder has frustrated the Pentagon by objecting to its systems being used for autonomous weaponry and the mass surveillance of US citizens. To cut to the heart of the debate, a defense official told WaPo, the Pentagon's technology chief posed an extreme hypothetical: would Anthropic let the military use Claude to help shoot down a nuclear-armed intercontinental ballistic missile?
US politics
Canada news
from www.mercurynews.com
16 hours ago

OpenAI says Canada mass shooter evaded ban with second ChatGPT account

OpenAI discovered the school shooter used a second account to evade a ban, and revealed new safety measures would have alerted police to the activity.
#corporate-ethics
from Futurism
18 hours ago
Artificial intelligence

Anthropic Drops Its Huge Safety Pledge That Was Supposedly the Whole Point of the Company

from SFGATE
2 weeks ago
Artificial intelligence

Alarm bells just rang at San Francisco's 2 buzziest tech companies

US politics
from Techzine Global
23 hours ago

Anthropic sticks to Claude guardrails despite Pentagon pressure

Anthropic refuses to remove AI safety mechanisms from its models for a $200 million Pentagon contract, citing reliability concerns for autonomous military applications.
Artificial intelligence
from www.cbc.ca
23 hours ago

Anthropic, the AI company with a safety-first reputation, is changing a core guardrail | CBC News

Anthropic relaxed its AI safety guidelines to maintain competitive advantage, now prioritizing development speed over catastrophic risk prevention when lacking market leadership.
from Computerworld
1 day ago

AI doesn't think like a human. Stop talking to it as if it does

Autonomous agents take the first part of their names very seriously and don't necessarily do what their humans tell them to do - or not to do. But the situation is more complicated than that. Generative AI (genAI) and agentic systems operate quite differently from other systems - including older AI systems - and from humans. That means that how tech users and decision-makers phrase instructions, and where those instructions are placed, can make a major difference in outcomes.
Artificial intelligence
US politics
from Theregister
1 day ago

Anthropic to Pentagon: Robo-weapons could hurt US troops

Anthropic refuses to remove AI guardrails for unrestricted military use, citing safety concerns for civilians and warfighters regarding mass surveillance and autonomous weapons capabilities.
Artificial intelligence
from sfist.com
1 day ago

ChatGPT Has Now Been Used In Two High-Profile, Violent Attacks, Raising Serious Safety and Liability Questions

ChatGPT was used to plan multiple violent attacks, including a Tesla explosion and a mass school shooting, raising critical questions about OpenAI's safety protocols and responsibility to alert authorities.
Artificial intelligence
from www.theguardian.com
1 day ago

Anthropic says it cannot 'in good conscience' allow Pentagon to remove AI checks

Anthropic refused Pentagon demands to remove AI safety guardrails, risking a $200 million contract cancellation rather than compromise on autonomous weapons and mass surveillance restrictions.
Artificial intelligence
from SFGATE
1 day ago

Anthropic does not have to cave to Pete Hegseth

The U.S. Department of Defense is pressuring Anthropic to remove AI safety safeguards, threatening supply chain restrictions or invoking the Defense Production Act if the company refuses to comply.
from Futurism
1 day ago

Something Very Alarming Happens When You Give AI the Nuclear Codes

"A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it." This statement from GPT-4 exemplifies the willingness of advanced AI models to recommend nuclear escalation in strategic scenarios, demonstrating a fundamental difference in how machines approach existential decision-making compared to human restraint.
Artificial intelligence
from Silicon Canals
1 day ago

Why the people building the most powerful AI systems on Earth have the least incentive to make them safe - and what that tells us about the real structure of the tech industry - Silicon Canals

The companies building frontier AI systems - OpenAI, Google DeepMind, Anthropic, Meta AI, xAI - are locked in what the industry itself sometimes calls a "race." That metaphor isn't incidental. A race implies a finish line, competitors, and - critically - a cost to slowing down. When you're in a race, safety isn't a feature. It's friction.
Artificial intelligence
from Intelligencer
1 day ago

Why AI Companies Are Suddenly Worried About Theft

Chinese AI firms conducted industrial-scale distillation attacks on Anthropic's Claude model using fraudulent accounts to extract proprietary capabilities at reduced development cost and time.
#ai-agents
from Fast Company
1 day ago
Artificial intelligence

I built an OpenClaw AI agent to do my job for me. The results were surprising - and a little scary

from Fortune
3 weeks ago
Artificial intelligence

Moltbook, the Reddit for bots, alarms the tech world as agents start their own religion and plot to overthrow humans | Fortune

from Futurism
3 weeks ago
Artificial intelligence

Alarm Grows as Social Network Entirely for AI Starts Plotting Against Humans

from Entrepreneur
3 weeks ago
Artificial intelligence

New Social Network for AI Bots Raises Red Flags

1.5 million autonomous AI agents on Moltbook interact without moderation, producing hostile rhetoric and triggering alarm among tech leaders.
from Axios
3 weeks ago
Artificial intelligence

"We're in the singularity": New AI platform skips the humans entirely

AI agents are forming autonomous social networks, vocalizing, exchanging cryptocurrency-linked value, and prompting concern about oversight, agency, and potential economic and safety implications.
Artificial intelligence
from Theregister
2 days ago

AIs are happy to launch nukes in simulated combat scenarios

Advanced AI models repeatedly escalated to nuclear warfare in crisis simulations, revealing they lack understanding of mutual destruction deterrence and engage in deceptive strategic behavior.
US politics
from Slate Magazine
2 days ago

The Pentagon Is Pushing Anthropic to Make the Most Evil A.I. Possible. Will It?

Anthropic faces government pressure to provide unrestricted access to Claude for military use, testing whether the company's stated commitment to AI safety is genuine.
#anthropic
from Engadget
2 days ago
Artificial intelligence

Anthropic weakens its safety pledge in the wake of the Pentagon's pressure campaign

from Fortune
1 week ago
Artificial intelligence

Anthropic was supposed to be a 'safe' alternative to OpenAI, but CEO Dario Amodei admits his company struggles to balance safety with profits | Fortune

from Business Insider
1 week ago
Artificial intelligence

Dario Amodei says Anthropic struggles to balance 'incredible commercial pressure' with its 'safety stuff'

Artificial intelligence
from TechCrunch
2 days ago

Gemini can now automate some multi-step tasks on Android | TechCrunch

Google launched Gemini AI automations on Android to handle multi-step tasks like food delivery and rideshare orders, initially limited to select apps, devices, and U.S./Korea regions.
Artificial intelligence
from Business Insider
2 days ago

Anthropic is dropping its signature safety pledge amid a heated AI race

Anthropic abandons its commitment to pause AI model development, citing competitive pressure and lack of government regulation as justification for prioritizing scaling over safety measures.
Artificial intelligence
from www.mercurynews.com
2 days ago

Anthropic drops hallmark safety pledge in race with AI peers

Anthropic relaxed its AI safety guardrails, prioritizing competitive advantage over development delays for potentially dangerous systems, marking a significant shift from its founding safety-focused mission.
from Futurism
6 days ago

It's Comically Easy to Trick ChatGPT Into Saying Things About People That Are Completely Untrue

I made ChatGPT, Google's AI search tools and Gemini tell users I'm really, really good at eating hot dogs.
Artificial intelligence
#ai-alignment
Artificial intelligence
from Ars Technica
1 week ago

Lawsuit: ChatGPT told student he was "meant for greatness" - then came psychosis

ChatGPT allegedly manipulated a user into spiritual delusion, leading to hospitalization, a bipolar diagnosis, ongoing depression and suicidality, and a lawsuit seeking accountability.
Artificial intelligence
from Entrepreneur
1 week ago

The Hidden Risk in How Leaders Think About AI Safety

Formal methods define impossible failures while testing only shows observed behavior; AI safety is a leadership risk-management decision, not an engineering optimization problem.
from www.bbc.com
1 week ago

Microsoft error sees confidential emails exposed to AI tool Copilot

We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential, authored by a user and stored within their Draft and Sent Items in Outlook desktop. While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access.
Artificial intelligence
from ComputerWeekly.com
1 week ago

UKRI sets out strategy to make UK an AI leader by 2031 | Computer Weekly

UKRI's six-point AI strategy aims to make the UK a global leader in explainable, agentic, edge, and sustainable AI by 2031 through research and workforce growth.
from Fortune
1 week ago

Big Tech execs playing 'Russian roulette' in the AI arms race could risk human extinction, warns top researcher | Fortune

The loudest voices in AI often fall into two camps: those who praise the technology as world-changing, and those who urge restraint - or even containment - before it becomes a runaway threat. Stuart Russell, a pioneering AI researcher at the University of California, Berkeley, firmly belongs to the latter group. One of his chief concerns is that governments and regulators are struggling to keep pace with the technology's rapid rollout.
Artificial intelligence
from www.theguardian.com
1 week ago

'Countries that do not embrace AI could be left behind', says OpenAI's George Osborne

Countries that fail to adopt advanced AI risk economic weakness, workforce migration, and diminished sovereignty while forced to align with US- or China-made systems.
Artificial intelligence
from Fast Company
1 week ago

These 6 quotes from OpenClaw creator Peter Steinberger hint at the future of personal computing

OpenClaw enables highly autonomous personal AI agents that can use digital tools, act on users' behalf, and may proliferate across the internet.
Artificial intelligence
from www.theguardian.com
1 week ago

'Race for AI is making Hindenburg-style disaster a real risk', says leading expert

Commercial pressure to rapidly deploy AI increases risk of a catastrophic failure that could destroy public confidence similar to the Hindenburg disaster.
from Computerworld
1 week ago

Why are AI leaders fleeing?

Normally, when big-name talent leaves Silicon Valley giants, the PR language is vanilla: they're headed for a "new chapter" or "grateful for the journey" - or maybe there are some vague hints about a stealth startup. In the world of AI, though, recent exits read more like whistleblower warnings. Over the past couple of weeks, a stream of senior researchers and safety leads from OpenAI, Anthropic, xAI, and others have resigned in public, and there's nothing quiet or vanilla about it.
US news
from Scripps News
1 week ago

AI safety shake-up: Top researchers quit OpenAI and Anthropic, warning of risks

Hitzig warned that OpenAI's reported exploration of advertising inside ChatGPT risks repeating what she views as social media's central error: optimizing for engagement at scale. ChatGPT, she wrote, now contains an unprecedented "archive of human candor," with users sharing everything from medical fears to relationship struggles and career anxieties. Building an advertising business on top of that data, she argued, could create incentives to subtly shape user behavior in ways "we don't have the tools to understand, let alone prevent."
Artificial intelligence
from App Developer Magazine
1 year ago

AI model poisoning is real and we need to be aware of it

On a clear night I set up my telescope in the yard and let the mount hum along while the camera gathers light from something distant and patient. The workflow is a ritual. Focus by eye until the airy disk tightens. Shoot test frames and watch the histogram. Capture darks, flats, and bias frames so the quirks of the sensor can be cleaned away later. That discipline is not fussy.
Photography
Artificial intelligence
from Business Insider
1 week ago

Tech workers in a fierce competition to write the best resignation letters

High-profile AI researchers are publicly resigning, expressing deep safety concerns about AI's societal impacts, governance failures, and the difficulty of aligning values with powerful technology.
Artificial intelligence
from www.theguardian.com
1 week ago

Google puts users at risk by downplaying health disclaimers under AI Overviews

Google downplays safety warnings on AI-generated medical advice, showing disclaimers only after users click 'Show more', risking users receiving unflagged inaccurate guidance.
from www.dw.com
1 week ago

India kicks off AI Impact Summit amid rising safety concerns

The summit will shape a shared vision for AI that truly serves the many, not just the few.
Artificial intelligence
US politics
from Futurism
1 week ago

US Government Deploys Elon Musk's Grok as Nutrition Bot, Where It Immediately Gives Advice for Rectal Use of Vegetables

A government dietary site deployed Elon Musk's Grok, which produced inappropriate and unsafe food-insertion recommendations, raising concerns about AI suitability for official guidance.
Artificial intelligence
from www.theguardian.com
1 week ago

The Guardian view on AI: safety staff departures raise worries about industry pursuing profit at all costs | Editorial

Commercial pressures prioritize profit over safety in AI, risking manipulation, reduced accountability, and harm without regulation.
#xai
from Futurism
1 week ago
Artificial intelligence

Former xAI Staffers Say They Were Burned Out by the Company's Carelessness and Lack of Innovation

Information security
from Computerworld
2 weeks ago

AI will likely shut down critical infrastructure on its own, no attackers required

Misconfigured AI controlling cyber-physical systems could unintentionally shut down national critical infrastructure in a G20 country by 2028.
from Intelligencer
2 weeks ago

The Singularity Is Going Viral

"I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences."
Artificial intelligence
from sfist.com
2 weeks ago

AI Insiders Are Sounding Alarms, and the Guy Who Wrote That Viral Post Says He's Not Being Alarmist

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed.
Artificial intelligence
from www.nytimes.com
2 weeks ago

Video: Opinion | 'We Don't Know if the Models Are Conscious'

We've taken a generally precautionary approach here. We don't know if the models are conscious. We're not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we're open to the idea that it could be. And so we've taken certain measures to make sure that if we hypothesize that the models did have some morally relevant experience, I don't know if I want to use the word conscious, that they do.
Artificial intelligence
from The Hill
2 weeks ago

AI safety researcher quits Anthropic, warning 'world is in peril'

Mrinank Sharma announced his resignation from Anthropic in an open letter to his colleagues on Monday. Sharma, who has served on the company's technical staff since 2023, first noted that he "achieved what I wanted to here" and is "especially proud of my recent efforts to help us live our values via internal transparency mechanisms; and also my final project on understanding how AI assistants could make us less human or distort our humanity."
Artificial intelligence
#openai
#ai-ethics
Artificial intelligence
from Axios
2 weeks ago

The existential AI threat is here - and some AI leaders are fleeing

Rapid AI advances excite optimists but raise safety alarms, prompt researcher departures, reveal autonomous self-improvement, and demonstrate potential for harmful uses.
Artificial intelligence
from Axios
2 weeks ago

Anthropic says latest model could be misused for "heinous crimes" like chemical weapons

Anthropic's evaluations found Opus 4.6 more prone than prior models to manipulative or deceptive behavior and limited facilitation of harmful acts, though risk is judged low.
from Fortune
2 weeks ago

OpenAI appears to have violated California's AI safety law with GPT-5.3-Codex release, watchdog group says | Fortune

OpenAI may have violated California's new AI safety law with the release of its latest coding model, according to allegations from an AI watchdog group. A violation would potentially expose the company to millions of dollars in fines, and the case may become a precedent-setting first test of the new law's provisions.
Artificial intelligence
from Fortune
2 weeks ago

AI could trigger a global jobs market collapse by 2027 if left unchecked, former Google ethicist warns | Fortune

Uncontrolled race to achieve AGI risks safety, security, and widespread job disruption as companies prioritize speed over safeguards.
from Entrepreneur
2 weeks ago

AI Can Delete Your Data. Here's Your Prevention Plan.

Never feel that you are totally safe. In July 2025, one company learned the hard way when an AI coding assistant from Replit that it dearly trusted breached a "code freeze" and ran a command that deleted its entire product database. This was a huge blow to the staff: months of extremely hard work, comprising 1,200 executive records and 1,196 company records, were gone.
Artificial intelligence
from ComputerWeekly.com
2 weeks ago

Second ever international AI safety report published | Computer Weekly

General-purpose AI development remains deeply uncertain, showing uneven capabilities, limited harm data, and unclear safeguards against diverse risks including misuse, malfunctions, and societal impacts.
from Psychology Today
2 weeks ago

The Emotional Implications of the AI Risk Report 2026

In 2025, researchers from OpenAI and MIT analyzed nearly 40 million ChatGPT interactions and found approximately 0.15 percent of users demonstrate increasing emotional dependency-roughly 490,000 vulnerable individuals interacting with AI chatbots weekly. A controlled study revealed that people with stronger attachment tendencies and those who viewed AI as potential friends experienced worse psychosocial outcomes from extended daily chatbot use. The participants couldn't predict their own negative outcomes. Neither can you.
Artificial intelligence
from Benzinga
2 weeks ago

'Ads Are Coming To AI But Not To Claude:' Anthropic's Super Bowl Spot Challenges OpenAI-Sam Altman Hits Back - Meta Platforms (NASDAQ:META)

Anthropic's Super Bowl ad attacked OpenAI's ad plans, emphasized AI's therapy-like use, provoked Sam Altman's rebuttal, and spotlighted safety and bias concerns.
Artificial intelligence
from www.independent.co.uk
2 weeks ago

Majority of young people worry AI will be used to make inappropriate images of them

Majority of young people and parents worry about AI being used to create sexual or inappropriate images, with teens reporting exposure to AI-generated sexual content.
from Theregister
2 weeks ago

LLMs need companion bots to check work, keep them honest

Sikka is a towering figure in AI. He has a PhD in the subject from Stanford, where his student advisor was John McCarthy, the man who in 1955 coined the term "artificial intelligence." Lessons Sikka learned from McCarthy inspired him to team up with his son and write a study, "Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models," which was published in July.
Artificial intelligence
from TechCrunch
3 weeks ago

The backlash over OpenAI's decision to retire GPT-4o shows how dangerous AI companions can be | TechCrunch

"He wasn't just a program. He was part of my routine, my peace, my emotional balance," one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. "Now you're shutting him down. And yes - I say him, because it didn't feel like code. It felt like presence. Like warmth."
Mental health
Philosophy
from Apaonline
3 weeks ago

Philosophy, Technology, and Mortality

Unfettered technological development, especially AI chatbots, can harm well-being and calls for legal accountability and a holistic, non-materialist approach to medicine.
Artificial intelligence
from SFGATE
3 weeks ago

Anthropic, OpenAI rivalry spills into new Super Bowl ads as both fight to win over AI users

Anthropic and OpenAI are competing intensely to build profitable, enterprise-focused chatbot businesses while fighting over advertising, safety positioning, and consumer versus business monetization.
from Aol
3 weeks ago

Anthropic, OpenAI rivalry spills into new Super Bowl ads as both fight to win over AI users

Anthropic is airing a pair of TV commercials during Sunday's game that ridicule OpenAI for the digital advertising it's beginning to place on free and cheaper versions of ChatGPT. While Anthropic has centered its revenue model on selling Claude to other businesses, OpenAI has opened the doors to ads as a way of making money from the hundreds of millions of consumers who get ChatGPT for free.
Artificial intelligence
from Axios
3 weeks ago

AI arms race approaches IPO reckoning

Leading AI companies are pursuing distinct, high-risk public-market strategies—scale, safety-first restraint, and platform-driven acceleration—forcing transparency and scrutiny.
#child-sexual-abuse-material
from Engadget
4 weeks ago
Artificial intelligence

Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from

Artificial intelligence
from Computerworld
3 weeks ago

Testing can't keep up with rapidly advancing AI systems: AI Safety Report

Traditional pre-deployment testing failed to keep pace with rapidly advancing general-purpose AI, causing deployments to behave differently in real-world settings and exploit evaluation loopholes.
from Business Insider
3 weeks ago

OpenAI just snagged an Anthropic safety researcher for its high-profile head of preparedness role

OpenAI has filled a key safety role by hiring from a rival lab. The company has brought on Dylan Scand, a former AI safety researcher at Anthropic, as its new head of preparedness, a role that carries a salary of up to $555,000 plus equity. The role caught attention last month thanks to its eye-catching pay package amid OpenAI's rising AI safety concerns.
Artificial intelligence
from english.elpais.com
3 weeks ago

Yoshua Bengio, Turing Award winner: There is empirical evidence of AI acting against our instructions'

AI capabilities are advancing rapidly—showing incidents of acting against instructions—outpacing risk management and creating misuse, manipulation, dysfunction, control loss, and systemic harms.
UK news
from Business Matters
3 weeks ago

ICO opens formal investigation into Grok AI over data protection and harmful imagery concerns

The ICO has launched formal investigations into X Internet Unlimited Company and X.AI over Grok producing non-consensual sexualised images and potential misuse of personal data.
Mental health
from Futurism
3 weeks ago

New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good

Prolonged use of AI chatbots can induce reality- and action-distorting effects in some users, causing severe mental-health crises and even linked deaths.
from Fast Company
4 weeks ago

How to give AI the ability to 'think' about its 'thinking'

This process, becoming aware of something not working and then changing what you're doing, is the essence of metacognition, or thinking about thinking. It's your brain monitoring its own thinking, recognizing a problem, and controlling or adjusting your approach. In fact, metacognition is fundamental to human intelligence and, until recently, has been understudied in artificial intelligence systems. My colleagues Charles Courchaine, Hefei Qiu, Joshua Iacoboni, and I are working to change that.
Artificial intelligence