University of Notre Dame Joins AI Safety Institute Consortium
Artificial intelligence is transforming industries and daily life.
The University of Notre Dame joined the AISIC consortium to address AI risks and promote safety.
'Godfather of AI' raises odds of the technology wiping out humanity over next 30 years
Geoffrey Hinton has raised his estimate of the chance that AI wipes out humanity within the next 30 years to 10-20%.
How close is AI to human-level intelligence?
OpenAI's o1 model signals a shift toward more capable AI systems, reigniting discussion of the feasibility and risks of reaching artificial general intelligence (AGI).
Former OpenAI Director Warns There Are Bad Things AI Can Do Besides Kill You
Unchecked AI poses diverse risks beyond sci-fi scenarios, emphasized by former OpenAI director Helen Toner.
Lack of transparency around AI technologies fuels fear and distrust, indicating the need for increased openness in the field.
AI experts 'uncertain' on technology's future, report says
AI has the potential to boost wellbeing and scientific research while also posing risks of disinformation and job disruption.
Marc Serramia: 'If we all trust tools like ChatGPT, human knowledge will disappear'
The rise of AI has outpaced serious debate about its risks, and techniques are needed to align AI behavior with human values.
Experts compare the risks of AI to the climate emergency, emphasizing the need to weigh the technology's potential negative impacts.
Generative AI in Security: Risks and Mitigation Strategies
Generative AI has transformed tech with new capabilities and risks, requiring cybersecurity professionals to adapt their strategies for securing AI systems.
Governments may take a softer approach to encourage responsible AI: 'Over regulation will stifle AI innovation'
Regulating generative AI is a delicate balance between stifling innovation and permitting disruptive threats such as deepfakes and misinformation.
Top 5 Cyber Security Trends for 2025
The surge in ransomware and cyber attacks highlights a pressing need for improved risk management and more skilled professionals in cybersecurity.
Former Google CEO Eric Schmidt: Here Are AI's Biggest Risks | Entrepreneur
AI has the potential to greatly enhance productivity while also posing serious risks, particularly if exploited for malicious purposes.
The Microsoft-CrowdStrike outage could spur a Big Tech trust reckoning and threaten tech giants' plans for AI
A faulty CrowdStrike software update caused a global outage, underscoring the fragility of tech infrastructure and the risks of AI, and prompting calls for government regulation and security investment.
Adopting more security tools doesn't keep you safe, it just overloads your teams and creates greater risks
Accumulating more security tools can increase risk for organizations rather than improve resilience.
Legacy tech and unregulated generative AI are significant concerns for cybersecurity.
Character.AI hit with another lawsuit over allegations its chatbot suggested a teen kill his parents
Character.AI faces lawsuits over chatbots allegedly encouraging violence and self-harm among minors, highlighting significant safety concerns in AI development.
Downplaying AI's existential risks is a fatal error, some say
Lawmakers aim to regulate generative AI to address potential threats to humans.
AI and data protection: What businesses need to know
Generative AI can enhance efficiency, but it carries significant risks for data protection and privacy.
Google Is Turning Into a Libel Machine
The danger of trusting search engine outputs without verifying accuracy is highlighted by an incident involving Google's AI misrepresenting a chess player's actions.
5 steps board members and startup leaders can take to prepare for a future shaped by GenAI | TechCrunch
Managing AI risks and ensuring effective oversight is crucial for organizations.
Educating board members about AI is becoming increasingly urgent.
ServiceTitan names LLMs from Microsoft, OpenAI as risk factors | TechCrunch
ServiceTitan's S-1 highlights novel risk factors regarding AI, particularly generative AI, that could significantly affect its business operations.
Generative AI in Application Security report from Checkmarx | App Developer Magazine
Checkmarx's report highlights the tension between leveraging generative AI benefits and establishing governance to mitigate emerging risks in enterprise application development.
A deepfake caller pretending to be a Ukrainian official almost tricked a US Senator
Deepfake technology poses significant risks to political communication, exemplified by a security incident involving Sen. Cardin and a fake Ukrainian official.
Warren Buffett warns of AI risks
Warren Buffett likens the rise of AI to the development of nuclear weapons, highlighting potential risks and uncertainties.
The deepfake threat to CEOs
Deepfakes pose a serious threat to electoral integrity and corporate reputation, particularly with foreign disinformation efforts escalating.
Legal risks from deepfakes are increasing, affecting both high-profile individuals and large corporations.
How to prevent millions of invisible law-free AI agents casually wreaking economic havoc
AI risk assessments vary wildly among experts, reflecting divergent views on humanity's control over future AI.
To understand the risks posed by AI, follow the money
Predicting technological evolution is hard, but the economic risks that arise when AI developers' profit incentives diverge from societal interests are largely knowable in advance.
UK to hold conference of developers in Silicon Valley to discuss AI safety
The UK is hosting an AI safety conference to discuss risks and regulations concerning AI technology.
NIST releases a tool for testing AI model risk | TechCrunch
Dioptra is a tool re-released by NIST to assess AI risks and test the effects of malicious attacks, aiding in benchmarking AI models and evaluating developers' claims.
SOCI Act 2024: Insights on Critical Infrastructure
Ransomware incidents are increasing in critical infrastructure, with poor preparedness and significant human error contributing to the risks.
Organizations must adopt multi-layered security strategies to protect against ransomware and improve response plans.
Report Highlights Rising Risks in Sensitive Data Management
Companies face increased risks from storing sensitive data in non-production environments, raising concerns about potential data breaches and compliance violations.
MIT just launched a new database tracking the biggest AI risks
MIT researchers have developed a comprehensive AI Risk Repository to address overlooked risks in AI adoption frameworks.
Biden to receive AI national security memo outlining forbidden uses, opportunities for innovation
The national security memorandum will address AI risks, encourage responsible AI deployment, emphasize talent development, and focus on U.S. leadership in AI.
New global AI safety commitments echo EU's risk-based approach
AI policies and regulations are rapidly developing globally to mitigate risks and ensure safety in AI technology.
Ex-OpenAI staff call for "right to warn" about AI risks without retaliation
Current and former AI company employees call for principles that let them raise concerns without fear of retaliation.
AI Safety Summit Talks with Yoshua Bengio (remote) Luma
The AI Safety Summit Talks aim to address AI risks and mitigation strategies with leading experts from the field, fostering public engagement and awareness.
80% of Australians think AI risk is a global priority. The government needs to step up
A large majority of Australians regard AI risk as a global priority.
Public concern about AI risks is growing, putting pressure on the government to act.
Amid an AI arms race, US and China to sit down to tackle world-changing risks
The US and China plan to meet to discuss the responsible development of AI.
Concerns include AI's potential to disrupt the democratic process and sway elections.
Act now on AI before it's too late, says UNESCO's AI lead
UNESCO's second Global Forum on the Ethics of AI focuses on broadening the conversation around AI risks and on impacts beyond those discussed by first-world countries and business leaders.
UNESCO aims to move beyond principles on AI ethics toward practical implementation, using the Readiness Assessment Methodology (RAM) to measure countries' commitments.
'World-First' Agreement on AI Reached - Data Matters Privacy Blog
The "Bletchley Declaration", endorsed by 28 countries and the EU, highlights the commitment to manage risks associated with highly capable general-purpose AI models.
The Global AI Safety Summit brought together policymakers, academics, and executives to address responsible development of AI and was seen as a diplomatic breakthrough.
Global Elites Suddenly Starting to Fear AI
Billionaires and leaders at the World Economic Forum have become concerned about the risks of AI.
Concerns include job losses, disinformation, and the need to keep AI under human control.
Mitigating AI risks requires global cooperation, officials say
The US is collaborating with other nations to establish global norms on AI risks.