Exclusive: Here are Customs and Border Protection's rules for using AI
"The directive, obtained through a public records request, spells out CBP's internal procedures for sensitive deployments of the technology. Agency officials are banned from using AI for unlawful surveillance, according to the document, which also says that AI cannot be used as a "sole basis" for a law enforcement action, or to target or discriminate against individuals. The document includes myriad procedures for introducing all sorts of artificial intelligence tools, and indicates that CBP has a detailed approach to deploying AI."
"Yet those rules also include several workarounds, raising concerns that the technology could still be misused, particularly amid the militarization of the border and an increasingly violent deportation regime, sources tell Fast Company. And then there's the matter of whether and how the directive is actually enforced. According to the directive, the agency is required to use AI in a "responsible manner" and maintain a "rigorous review and approval process.""
"It also discusses special approvals needed for deploying "high-risk" AI and how the agency internally handles reports that officials are using the tech for a "prohibited" application. The document has a warning for CBP staff that work with generative AI, too. "All CBP personnel using AI in the performance of their official duties should review and verify any AI-generated content before it is shared, implemented, or acted upon," the directive states."
The directive aims to create a framework for strategic artificial intelligence use at Customs and Border Protection and to set out rules for safe, secure deployments. Agency officials are banned from using AI for unlawful surveillance, and AI cannot serve as the sole basis for a law enforcement action or be used to target or discriminate against individuals. The directive mandates responsible use, a rigorous review and approval process, an inventory of AI applications, and special approvals for high‑risk systems. It also sets out how the agency internally handles reports of prohibited uses and instructs personnel to verify, and be accountable for, AI‑generated content. Still, the rules contain workarounds that could enable misuse, and how the directive is enforced remains uncertain.
Read at Fast Company