In line with our AI Principles, we're thrilled to announce that New Relic has obtained ISO/IEC 42001:2023 (ISO 42001) certification in the role of an AI developer and AI provider. This achievement reflects our commitment to developing, deploying, and providing AI features both responsibly and ethically. The certification was performed by Schellman Compliance, LLC, the first ANAB-accredited certification body based in the United States.
This Is for Everyone reads like a family newsletter: it tells you what happened, recounting the Internet's origin and evolution in great detail, but rarely explaining why the ideal of a decentralized Internet was not realized. Berners-Lee's central argument is that the web has strayed from its founding principles and been corrupted by profit-driven companies that seek to monetize our attention. But it's still possible to "fix the internet", he argues, outlining a utopian vision for how that might be done.
The initiative includes European Business Wallets, which the European Commission (EC) said will offer companies a single digital identity to simplify paperwork and make it much easier to do business across EU member states. Valdis Dombrovskis, commissioner for economy and productivity, said: "Today's proposal represents an important first step in our digital simplification agenda, aiming to create a more favourable business environment for European companies."
AI has rapidly become a reliable coding assistant for many developers -- so much so that many are wondering about the future of the entire profession. Entry-level coding jobs are dwindling for recent grads as teams offload junior tasks to AI assistants; at the same time, experts cite the real limitations of these tools as proof that engineers will never actually become obsolete.
The guidelines advocate using existing legal frameworks like the Information Technology Act and the Digital Personal Data Protection Act to handle emerging risks such as deepfakes and unauthorized data use. "India's AI governance adopts a balanced, agile and flexible approach that promotes innovation and safety," Amal Mohanty, AI policy expert and one of the lead authors of the guidelines, told DW. He said India's approach is different to the rules charted by the EU, China and the US. "Unlike the EU's detailed AI Act that imposes strict, rule-based obligations based on risk classification, ours favors self-regulation and voluntary measures," he said. "This allows developers more freedom to innovate while managing AI risks responsibly."
Chief People Officer Gina Vargiu-Breuer explains how the enterprise software giant is preparing its global workforce for a future where AI handles 42% of tasks, without replacing humans. At SAP Connect in Las Vegas, Vargiu-Breuer outlined SAP's comprehensive AI workforce strategy to Techzine TV. The approach goes beyond simply introducing new tools; it fundamentally reshapes how the company thinks about jobs, skills, and human potential in an AI-augmented workplace.
World's first AI minister set to 'give birth' to 83 children
Albania's prime minister, Edi Rama, has announced that Diella, the world's first AI minister, is pregnant with 83 children. Speaking in Berlin, Mr Rama said that Diella will soon give birth to the children, who will assist individual members of parliament. These children will have the knowledge of their mother, he said. Their roles will include participating in parliamentary sessions, maintaining records, informing MPs on how to react, and summarising discussions.
"Those AI tools are being trained on our trade secrets." "We'll lose all of our customers if they find out our teams use AI." "Our employees will no longer be able to think critically because of the brain rot caused by overreliance on AI." These are not irrational fears. As AI continues to dominate the headlines, questions about data privacy and security, intellectual property, and work quality are legitimate and important.
EY's newly released 2025 Technology Risk Pulse Survey, based on responses from more than 400 U.S. executives at companies with over $1 billion in annual revenue, reveals a growing gap between finance and technology leaders on AI priorities. According to additional data shared with CFO Daily, 56% of CFOs vs. 70-72% of CIOs and CTOs say AI integration is a top priority over the next two to four years.
To avoid a chaotic, ad hoc and, above all, risky AI implementation, all of your people need to be onboarded to your AI strategy and vetted collection of AI tools. It's also great for your people: Research shows that folks who understand AI are more likely to engage with AI tools and less likely to view AI with fear or mistrust. And given that 82% of leaders plan to expand the capacity of their workforce with digital labor, there's no time like the present.
"We quickly identified the transformative impact that AI could deliver across our organisation, and over the last few years have put in place the assurance frameworks and tools we need to deploy AI safely and at scale. "With these foundations in place, we're reimagining how we operate by embedding AI across our business to drive smarter decisions, faster outcomes and better experiences."
A new report from IFS, a provider of industrial artificial intelligence (AI) software, said there is an "invisible revolution" in which the focus is shifting from productivity-led AI experimentation to "embedded, operational AI across core business processes." The report, titled "The IFS Invisible Revolution Study 2025," surveyed more than 1,700 senior decision-makers at industrial enterprises around the world. The report noted what IFS refers to as an "execution gap," in which companies have moved into AI faster than their team members can upskill.
"If we look back on the last 10, 15 years on social media, I think we'd be hard pressed to say that the velocity and the impact and the adverse effect of social media is equal to, or more than, the benefits that have occurred," he said. "And one of the reasons is the fact that there wasn't regulation, and the regulation that has come is too late." He said AI is progressing so fast and "the regulators are so far behind, they don't even know what the questions are because of the speed of this thing."
Many businesses have had to learn in recent years that adopting AI to automate certain organizational tasks or employees' day-to-day workflows won't necessarily translate to financial gain. The technology may make workers more productive in some respects, but it also presents a whole host of risks -- some of them involving cybersecurity, some of them legal, some of them psychological. In some cases, AI actually creates more work for supervisors.
"As we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman writes. Earlier this month, OpenAI hinted at allowing developers to create "mature" ChatGPT apps after it implements the "appropriate age verification and controls." OpenAI isn't the only company dipping into erotica, as Elon Musk's xAI previously launched flirty AI companions, which appear as 3D anime models in the Grok app.
Ten major philanthropic organizations are banding together to ensure that regular Americans, not just a small group of tech billionaires, have a say in how AI will shape society and who will benefit. The organizations announced Tuesday the formation of Humanity AI, a $500 million, five-year initiative aimed at ensuring artificial intelligence serves people and communities rather than replacing or diminishing them.
At Fortune, we've spent almost a century studying what separates the good leaders from the great ones: the ones who don't just survive disruption, but shape it. The next wave of corporate chiefs is emerging from a radically different playbook. They're products of an economy defined by technological acceleration, and operate with fluency across disciplines that didn't even exist in the CEO vocabulary a decade ago: data science, AI governance, cybersecurity, social trust, geopolitical volatility, and shifting expectations of what leadership should look like.
We study AI and democracy. We're worried about 2050, not 2026. Half of humanity lives in countries that held national elections last year. Experts warned that those contests might be derailed by a flood of undetectable, deceptive AI-generated content. Yet what arrived was a wave of AI slop: ubiquitous, low quality, and sometimes misleading, but rarely if ever decisive at the polls.
AI tools, such as chatbots, promise speed, savings and scalability. But behind each successful interaction, there's a less visible truth: when AI systems operate without active oversight, they silently accumulate risk. These hidden liabilities, spanning brand damage, operational drag, ethical concerns and cybersecurity gaps, often remain undetected until a public crisis erupts. Here are three real-world cases of AI assistant deployment. Each began as a quick win. Each revealed what happens when governance is an afterthought.
According to Rajat Taneja, Visa's president of technology, the global payments company has woven AI into every part of its business. Employees across Visa are tapping AI in their everyday workflows for tasks ranging from data analysis to software development. The company has built more than 100 internal AI-powered business applications tailored to specific use cases and has over 2,500 engineers working specifically on AI. Visa is also using AI to create new products and services for its customers, such as faster onboarding, simplified processes for managing disputes, and infrastructure for agentic AI technologies.
Lisa, Jennie, Rosé, and Jisoo have broken numerous records since their debut in 2016: the first to sell one million, then two million, album copies in South Korea; the first Korean group to top the Billboard 200 album chart; the highest-grossing concert tour by a female artist. Blackpink, and K-pop and K-culture more broadly, are now a source of South Korean "soft power," expanding the country's cultural influence across Asia and beyond.
As AI adoption accelerates, the consequences, intended and not, are becoming harder to ignore. From biased algorithms to opaque decision-making and chatbot misinformation, companies are increasingly exposed to legal, reputational, and ethical risks. And with the rollback of federal regulation, many are navigating this landscape with fewer guardrails. But fewer guardrails doesn't mean fewer consequences, only that the burden of responsibility shifts more squarely onto the businesses deploying these systems. Legal, financial, and reputational risks haven't disappeared; they've just moved upstream.
Over 40 minutes, the panel returned again and again to three themes: data quality, organizational alignment and cultural readiness. The consensus was clear: AI doesn't create order from chaos. If organizations don't evolve their culture and their standards, AI will accelerate dysfunction, not fix it.

Clean data isn't optional anymore

Allen set the tone from the executive perspective. He argued that enterprises must build alignment on high-quality, structured and standardized data within teams and across workflows, applications and departments.
Hallucinations have commonly been considered a problem for generative AI, with chatbots such as ChatGPT, Claude, or Gemini prone to producing 'confidently incorrect' answers in response to queries. This can pose a serious problem for users. There are several cases of lawyers, for example, citing non-existent cases as precedent or presenting the wrong conclusions and outcomes from cases that really do exist. We only know about these instances because they're embarrassingly public, but hallucination is an experience all users will have had at some point.
Every Fortune 500 CEO investing in AI right now faces the same brutal math. They're spending $590-$1,400 per employee annually on AI tools while 95% of their corporate AI initiatives fail to reach production. Meanwhile, employees using personal AI tools succeed at a 40% rate. The disconnect isn't technological; it's operational. Companies are struggling with a crisis in AI measurement.
Since the AI boom kicked off with ChatGPT's debut about three years ago, the technology's breathtaking capabilities have amazed the world. Tech companies have raced to develop better AI systems even as experts warn of its risks, including existential threats like engineered pandemics, large-scale misinformation or rogue AIs running out of control, and call for safeguards. The U.N.'s adoption of a new governance architecture is the latest and biggest effort to rein in AI.