A United Nations expert warned about an “alarming” trend of “using security rhetoric” to justify “intrusive and high-risk technologies,” including artificial intelligence, to spy on social rights activists and journalists.
U.N. expert Fionnuala Ní Aoláin called for a moratorium on AI development, among other advanced technologies like drones, until “adequate safeguards are in place,” according to a March 2023 report that was presented to the Human Rights Council.
“Exceptional justifications for the use of surveillance technologies in human rights ‘lite’ counter-terrorism often turn into mundane regular use,” Ní Aoláin said in a statement after the report’s release.
Without meaningful oversight, she argued, countries and private actors can use AI-powered tech with impunity “under the guise of preventing terrorism.”
Generative AI has the potential to create a utopia, or the power to plunge a country into a dystopia, experts have claimed.
“AI is one of the more complex issues we have ever tried to regulate,” Kevin Baragona, founder of DeepAI.org, told Fox News Digital in a previous interview. “Based on current governments’ struggle to regulate simpler issues, it’s looking hard to be optimistic we’ll get sensible regulation.”
AI was among a handful of “high-risk technologies” Ní Aoláin discussed, and the topic was broken out as its own subsection in the 139-page report.
“AI has the properties of a general-purpose technology, meaning that it will open up wide-ranging opportunities for application,” she wrote in her report.
At the heart of AI are algorithms that can create profiles of people and predict their likely future movements by drawing on vast amounts of data, including historical, criminal justice, travel, communications, social media and health information.
It can also identify places as “likely sites of increased criminal or terrorist activity” and flag individuals as alleged suspects and future re-offenders, according to Ní Aoláin’s report.
“AI assessments alone should not be the basis for reasonable suspicion given its inherently probabilistic nature.”