U.S. District Judge Sara Ellis recently called attention to immigration agents using artificial intelligence (AI) to draft use-of-force reports, warning that the practice risks inaccuracies and further erodes public trust in immigration enforcement in areas like Chicago, especially amid ongoing protests.

In a pointed footnote to a lengthy court ruling, Judge Ellis wrote that reports generated with ChatGPT undermine the agents' credibility and could explain discrepancies between official accounts and body camera footage.

Experts say that relying on AI to document an officer's own experience, particularly when the model is given only minimal inputs, is a misuse of the technology with serious implications for both accuracy and privacy.

An Officer's Necessary Insight

The challenge for law enforcement agencies lies in balancing the adoption of new AI tools with the imperative for accuracy and professionalism. Ian Adams, a criminologist who serves on a task force focused on AI, said the practice described by Judge Ellis runs contrary to best practices, calling it a 'nightmare scenario' for report writing.

Amid calls for clear guidelines, some departments have restricted the use of AI for critical reports. An officer's own account of what they perceived is considered essential to evaluating whether their use of force was appropriate.

Concerns Over Privacy and Evidence Integrity

AI-generated reports raise not only factual concerns but also privacy hazards. Katie Kinsey of NYU School of Law pointed out that sharing sensitive images with a public AI model could compromise their confidentiality and open the door to misuse. Too often, she noted, law enforcement rushes to adopt new technology and waits for clear guidelines until after problems have already arisen.

Determining Effective Use of Technology

Incorporating AI into policing is not straightforward. Companies like Axon have found that AI produces better reports from audio than from images, and the models are not reliably accurate at characterizing what appears in visual footage, raising concerns about their use in serious criminal cases.

The ongoing debate over AI's role in law enforcement demands careful consideration of its ethical, legal, and procedural implications, particularly as agencies struggle to keep pace with rapid technological change.