A Stanford University study on artificial intelligence (AI) and its use in society released Tuesday, Sept. 6, reported that “predictive policing” technology will emerge and become the norm in American cities within the next 15 years.
Findings of the report, titled “Artificial Intelligence and Life in 2030,” were compiled by a panel of AI experts, who agreed that while the technology isn’t a threat to humanity in its current form, policies are needed that promote “helpful innovation . . . and civic responsibility for addressing societal issues.”
“In the long-term, AI will enable new wealth creation that will require social debate on how the economic fruits of AI technology should be shared,” the study concluded.
Perhaps one of the most interesting aspects of AI’s application to public life, according to the panel, is the use of “machine learning” technology, which will be built into cybersecurity software as well as surveillance cameras to identify both online and street crimes.
Security cameras with built-in AI software, for example, will be able to “detect anomalies pointing to a possible crime.” The report also says, however, that these techniques may “lead to even more widespread surveillance.”
Social media scanning programs will also become the norm for some law enforcement organizations charged with preventing radicalization and recruitment by extremist groups and with detecting online terrorist plots.
A related technique under development is crowd monitoring at large events, used to identify potential security risks, along with “crowd simulations” for crowd control.
Finally, AI experts point to the development of new security technology by federal agencies, such as a new Transportation Security Administration (TSA) project called the Dynamic Aviation Risk Management System (DARMS). DARMS will presumably use biometric identification technology built into tunnels at airport terminals; as travelers walk through, the system will assess each person’s threat-risk level.
Biometric identification includes facial and vocal recognition, hand and earlobe geometry, and retina and iris scans.
Ultimately, the AI panel recommends a balanced approach to this new era of policing and security technology, noting that “no machines with self-sustaining long-term goals and intent have been developed.”
“If society approaches these technologies primarily with fear and suspicion, missteps that slow AI’s development or drive it underground will result, impeding important work on ensuring the safety and reliability of AI technologies,” the report warned.
[The Sun] [Photo courtesy tuvie.com]