What if I told you that everything you know and everything you do to ensure quality backups is no longer viable? In fact, what if I told you that in an era of generative AI, when it comes to backups, we're all pretty much screwed?
QR codes are two-dimensional images with glyphs of various sizes that store not just numbers, but text. When scanned, your phone extracts the encoded information and can act on it. For example, QR codes often embed URLs, allowing you to scan, say, a parking meter to launch a webpage where you can pay online.
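Once a scanner has extracted the payload string, an app typically inspects it before acting on it. A minimal sketch using Python's standard library, assuming the decoded payload is a URL (the `meter_id` parameter and `pay.example.com` host are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

def handle_qr_payload(payload: str) -> dict:
    """Inspect a decoded QR payload before acting on it.

    Assumes the scanner has already extracted the raw string from
    the image; only web URLs are accepted here.
    """
    parsed = urlparse(payload)
    if parsed.scheme not in ("http", "https"):
        raise ValueError("refusing non-web payload: " + payload)
    return {
        "host": parsed.netloc,
        "path": parsed.path,
        "query": parse_qs(parsed.query),
    }

# A parking-meter style URL (hypothetical):
info = handle_qr_payload("https://pay.example.com/meter?meter_id=1042")
print(info["query"]["meter_id"])  # ['1042']
```

Rejecting non-web schemes up front is one reason phone cameras show you the decoded URL before opening it: the payload is untrusted input.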
Session replay tools capture different types of user actions. Some tools focus on DOM-level signals like clicks, scrolls, and heatmaps. Others provide full video-style replays of user sessions. Because capabilities vary so widely, you need to understand exactly what data a tool collects and the privacy risk that comes with it.
In 1973, long before the modern digital era, the US Department of Health, Education, and Welfare (HEW) published a report called "Records, Computers, and the Rights of Citizens." Networked computers seemed "destined to become the principal medium for making, storing, and using records about people," the report's foreword began. These systems could be a "powerful management tool." But with few legal safeguards, they could erode the basic human right to privacy - particularly "control by an individual over the uses made of information about him."
A High Court judge has ruled that thousands of people affected by a major data breach at Capita can continue with their legal action against the outsourcing group, in a decision being described as a landmark for large-scale data privacy claims in the UK. In a judgment handed down on 9 February, Master Dagnall rejected arguments from Capita's legal team that solicitors acting for more than 8,000 claimants had abused the court process.
These days, the internet "looks a hell of a lot more like Las Vegas than 'Little House on the Prairie.'" That's how Andrew Ferguson, chair of the Federal Trade Commission, described the online experience of children in his opening remarks for an FTC workshop on age verification last week. The event took place on Wednesday, January 28, which also happened to be Data Privacy Day, an annual "holiday" of sorts to raise awareness about privacy issues and encourage better data protection practices.
In video comments, the U.S. Attorney General Pam Bondi said, "Make no mistake, under President Trump's leadership and this administration, you have the right to worship freely and safely. And if I haven't been clear already, if you violate that sacred right, we are coming after you." So people have a First Amendment right to worship that DOJ will protect, but journalists suddenly have no First Amendment right to report on issues of public interest and concern? We disagree.
Parents and guardians in the UAE are now legally required to supervise their children's online activity under the country's new Child Digital Safety Law, which transforms digital safety from guidance into enforceable responsibility. The legislation applies not only to families but also to global platforms used by children in the UAE, even if those companies have no physical presence in the country.
The UK government has announced a consultation, asking people for their feedback on whether to introduce a social media ban for children under 16 years old. It would also explore how to enforce that limit, how to restrict tech companies' access to children's data, and how to limit "infinite scrolling," as well as access to addictive online tools.

According to information provided to Reuters, India is considering a new security requirement that could require smartphone manufacturers to share their source code with the state. The proposal is part of a package of 83 security standards designed to strengthen protection against data breaches and fraud. The requirements include that manufacturers must allow Indian authorities to review the source code in special test labs and notify the government before major software updates are released.
To all employees, this company takes data protection very seriously. It has a material impact on our operations. The CIO and IT Director are in charge of those policies. If one of them comes to your business unit and gives you an instruction, take it as seriously as you would instructions from any other C-level, including myself. As of this date, know this: If you disregard or otherwise violate any IT instruction, you better pray that they are wrong.
Researchers have developed a tool that they say can make stolen high-value proprietary data used in AI systems useless, a solution that CSOs may have to adopt to protect their sophisticated large language models (LLMs). The technique, created by researchers from universities in China and Singapore, is to inject plausible but false data into what's known as a knowledge graph (KG) created by an AI operator. A knowledge graph holds the proprietary data used by the LLM.
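The core idea, injecting plausible but false triples that only the graph's owner can filter back out, can be sketched in a few lines. This is a toy illustration of the concept, not the researchers' actual method; all names and triples are invented:

```python
# Toy sketch of decoy injection into a knowledge graph, modelled
# as a set of (subject, predicate, object) triples. The operator
# keeps a private ledger of which triples are fake, so legitimate
# use is unaffected while stolen copies are contaminated.

KGTriple = tuple[str, str, str]

def inject_decoys(kg: set[KGTriple], decoys: list[KGTriple]) -> set[KGTriple]:
    """Return a poisoned copy of the graph containing the decoys."""
    return kg | set(decoys)

def owner_view(poisoned: set[KGTriple], decoys: list[KGTriple]) -> set[KGTriple]:
    """The legitimate operator filters decoys out via the private ledger."""
    return poisoned - set(decoys)

kg = {("AcmeCorp", "supplier_of", "WidgetCo"),
      ("WidgetCo", "headquartered_in", "Oslo")}
decoys = [("AcmeCorp", "supplier_of", "FakeCo"),       # plausible but false
          ("WidgetCo", "headquartered_in", "Lisbon")]  # contradicts the truth

poisoned = inject_decoys(kg, decoys)
assert owner_view(poisoned, decoys) == kg  # owner recovers clean data
```

A thief who exfiltrates `poisoned` without the decoy ledger cannot tell the false triples from the real ones, degrading any LLM grounded on the stolen graph.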
Large language models (LLMs) base their predictions on training data and cannot respond effectively to queries about other data. The AI industry has dealt with that limitation through a process called retrieval-augmented generation (RAG), which gives LLMs access to external datasets. Google's AI Overviews in Search, for example, use RAG to provide the underlying Gemini model with current, though not necessarily accurate, web data.
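The retrieval step that RAG adds can be sketched compactly. This toy scores documents by keyword overlap where production systems use embedding similarity, and the assembled prompt stands in for what would be sent to a real LLM; the corpus and query are invented:

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = _tokens(query)
    return sorted(corpus, key=lambda doc: len(q & _tokens(doc)), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the LLM can answer beyond its training data."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Paris hosted the 2024 Summer Olympics.",
    "Mount Fuji is the highest peak in Japan.",
]
prompt = build_prompt("How tall is the Eiffel Tower?", corpus)
```

The Google example works the same way at scale: the retriever fetches live web results, so the Gemini model's answer is only as accurate as what was retrieved.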
The prospects for phishing in the era of AI could be huge. We've (arguably) moved well beyond requests for money from fake nation-state princes; we're now in a place where every message format (emails, audio messages, or video messages) can be faked. "We are going to have to have multiple trusted channels with those who are close to us. If one channel, email, WhatsApp, Slack, etc. gets an important message, you may need to validate this on another channel."
It also includes a requirement for parental controls, and for protection of data that describes minors. One article in the draft addresses how AI companions interact with the elderly. The draft also calls for AI companions to remind users every two hours that they are not interacting with a human, and for providers of such systems to give advance notice of outages.
The EU has extended its adequacy decision, allowing data sharing with and from the UK under the General Data Protection Regulation for at least six more years. This will bring some relief to techies in the UK, the member-state bloc, and beyond whose work or product set depends on the frictionless movement of data between the two, especially as they can point to the 2031 expiration date as a risk-management factor for backers and partners. But the move does have its critics.
FC Barcelona has issued a strong denial in response to claims from the platform Som un Clam, stating that the club is not using its resources to influence the vote of its members or conduct electoral campaigns. In an official statement, Barcelona clarified that "the club is not carrying out any opinion surveys among members, neither internally nor externally." The statement added that any polls currently circulating are completely independent of the club, and noted that the personal data of members is protected under current data protection legislation.
Freemasonry has the highest moral and ethical standards, standards that have been a cornerstone of its identity since the earliest days of organised Freemasonry over 300 years ago. The decision by the Metropolitan Police casts an aura of mistrust over the entire Freemason community. Given the obvious, detrimental impact on our members, United Grand Lodge of England, Order of Women Freemasons and Honourable Fraternity of Ancient Freemasons consider that we now have no choice but to take legal action to challenge this unlawful decision.
The groups cite a "high volume" of data errors linked to the eVisa scheme, which they say amount to both operational failures and serious data protection breaches. In one documented case referenced in the letter, the passport details, contact information, and immigration status of a Canadian citizen were wrongly disclosed to a Russian woman. Other failures have seen migrants locked out of their eVisa accounts, with no effective support from the Home Office and no clear way to escalate urgent issues.
The new app in the ServiceNow Store offers bidirectional, policy-driven backup and recovery orchestration directly within the ServiceNow AI Platform. Users get full auditability, real-time status synchronization, and compliance reporting. ServiceNow users can monitor, orchestrate, and automate Veeam-powered data protection without leaving the platform. With this app, Veeam is primarily targeting highly regulated sectors such as manufacturing, healthcare, pharmaceuticals, and finance. Companies that want to provide their teams with self-service data security and automation are also part of the target group.
"We know frontline staff want to get this right but are struggling with lack of resource and guidance. Improving this process starts at the beginning - when a child enters the care system, their information should be recorded with their rights in mind, knowing that they may request it later," he said in a statement.
Britain's data protection regulator issued 17 preliminary enforcement notices and sent warning letters to hundreds of website operators throughout 2025, a pressure campaign that brought 979 of the UK's top 1,000 websites into compliance with cookie consent rules and gave an estimated 40 million people (roughly 80% of UK internet users over age 14) greater control over how they are tracked for personalized advertising.
The Royal Borough of Kensington and Chelsea and Westminster City Council, which share some IT infrastructure, said a number of systems had been affected across both authorities, including phone lines. The councils, which provide services for 360,000 residents, shut down several computerised systems as a precaution to limit further possible damage. Engineers at RBKC worked through the night on Monday, when the incident occurred, and Tuesday.
A manager at Children's Health Ireland who is challenging her dismissal for suspending children from the spinal surgery waiting list reported "serious concerns" that data protection breaches involved information about child patients and their families.
A "discriminatory" artificial intelligence (AI) model used by Sweden's social security agency to flag people for benefit fraud investigations has been suspended, following an intervention by the country's Data Protection Authority (IMY). IMY's involvement, which began in June 2025, was prompted by a joint investigation from Lighthouse Reports and Svenska Dagbladet (SvD) that revealed in November 2024 that a machine learning (ML) system used by Försäkringskassan, Sweden's Social Insurance Agency, was disproportionately and wrongly flagging certain groups for further investigation over social benefits fraud.
Dell Technologies is bringing innovations to its PowerProtect appliances. The solutions are designed to help companies respond more quickly to cyberattacks by improving protection, enabling intelligent automation, and providing greater flexibility. IT professionals are increasingly concerned about disruptive cyberattacks. Cyber resilience is central to the enhancements. PowerProtect now integrates with Dell NativeEdge for edge computing. The solution also supports Nutanix Hyper-Converged environments via Prism Central.