
Delaware Police Unveil Newest Security Cameras, But Citizens Revolt As Rights Questioned

 

Delaware police cruisers are being equipped with a “dashcam on steroids.” Able to read license plates and identify faces, the upgraded security cameras are what police call an “extra set of eyes.” As the newest technology is unveiled, civil rights groups fear a citizen revolt over questions of “profiling” and “misuse of data.”

“Smart” cameras hooked up to artificial intelligence are being rolled out with the goal of helping police recognize fugitives, missing children, and wandering seniors. “The video feeds will be analyzed using artificial intelligence to identify vehicles by license plate or other features,” says David Hinojosa, whose company, Coban Technologies, is installing the cutting-edge equipment. “We are helping officers keep their focus on their jobs.”

The Coban system acts as “a digital evidence hub” that can “consolidate digital evidence from multiple sources, including up to six HD quality cameras, body cameras, and other sources.” With FOCUS H1, the system allows third-party applications to “identify a wide range of objects, such as vehicle make and model, faces, weapons, dangerous movements or behaviors, and other artificial intelligence based applications.”
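
For readers curious how such a hub might be wired together, the sketch below (with hypothetical names; it is not Coban’s actual software) shows one way frames from several cameras could be fanned out to pluggable analysis modules:

```python
# Hypothetical sketch of an evidence-hub pattern: frames from several camera
# sources are fanned out to pluggable analyzers (plate readers, face or
# weapon detectors). Names are illustrative, not Coban's actual API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Frame:
    camera_id: str    # e.g. "dash-front", "body-cam-1"
    timestamp: float  # seconds since epoch
    image: bytes      # encoded frame data


class EvidenceHub:
    def __init__(self) -> None:
        self.analyzers: List[Callable[[Frame], List[dict]]] = []

    def register(self, analyzer: Callable[[Frame], List[dict]]) -> None:
        """Plug in a third-party analysis module."""
        self.analyzers.append(analyzer)

    def ingest(self, frame: Frame) -> List[dict]:
        """Fan an incoming frame out to every registered analyzer."""
        findings: List[dict] = []
        for analyze in self.analyzers:
            findings.extend(analyze(frame))
        return findings


def plate_reader(frame: Frame) -> List[dict]:
    # Stand-in for a real license-plate recognizer.
    return [{"type": "plate", "camera": frame.camera_id, "value": "UNKNOWN"}]


hub = EvidenceHub()
hub.register(plate_reader)
print(hub.ingest(Frame(camera_id="dash-front", timestamp=0.0, image=b"")))
```

Treating each analyzer as a plain plug-in is what would let “third-party applications” drop in without changing the hub itself.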

One of Hinojosa’s competitors, Deep Science, uses a similar system to “help retail stores detect in real time if an armed robbery is in progress, by identifying guns or masked assailants.” The company has several test projects running with American retailers to sound an alarm if the system detects a robbery, fire, or other threat.

The biggest advantage comes from the computer’s unlimited attention span. “A common problem is that security guards get bored,” a Deep Science co-founder and former Pentagon engineer explains. Their technology, he says, “can monitor for threats more efficiently and at a lower cost than human security guards.”

Advances in the way computers recognize objects make it possible for algorithms to recognize weapons, specify vehicle makes and models, read license plates, and identify individuals as an aid to human law enforcement officers and security guards.

The biggest challenge used to be that “video is unstructured, it’s not searchable, you had to go through hundreds of hours of video with fast forward and rewind.” Israeli company Briefcam claims to have solved that problem: “We detect, track, extract and classify each object in the video. So it becomes a database.” Because camera feeds can now be monitored by bots in real time, authorities can “quickly find targets from video surveillance.”
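
A rough sketch of that “video becomes a database” idea, assuming detections already come from some object detector (the detect_objects function below is only a placeholder), could look like this:

```python
# Minimal sketch of the "video becomes a database" idea: per-frame detections
# (from any object detector; detect_objects() is a placeholder) are stored as
# rows so footage can be searched instead of scrubbed by hand.
import sqlite3
import time

conn = sqlite3.connect("detections.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS detections (
        camera_id TEXT,
        timestamp REAL,
        label     TEXT,   -- e.g. 'car', 'person', 'license_plate'
        detail    TEXT    -- e.g. plate text or vehicle make/model
    )
""")


def detect_objects(frame):
    """Placeholder for a real detector (YOLO, SSD, etc.)."""
    return [{"label": "car", "detail": "sedan"}]


def index_frame(camera_id, frame):
    rows = [(camera_id, time.time(), d["label"], d["detail"])
            for d in detect_objects(frame)]
    conn.executemany("INSERT INTO detections VALUES (?, ?, ?, ?)", rows)
    conn.commit()


# Later: "quickly find targets" by querying instead of rewinding video.
index_frame("cruiser-12", frame=None)
hits = conn.execute(
    "SELECT camera_id, timestamp, detail FROM detections WHERE label = ?",
    ("car",)
).fetchall()
print(hits)
```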

Automation also eliminates missed recognition due to human inattention. “It’s not only saving time. In many cases, they wouldn’t be able to do it because people who watch video become ineffective after 10 to 20 minutes,” Amit Gavish from Briefcam says.

Computer graphics card maker Nvidia has two different “supercomputing modules,” Jetson and Metropolis, that nearly 50 individual partners use for security-related tasks. One partner, Umbo Computer Vision, “has developed an AI-enhanced security monitoring system which can be used at schools, hotels or other locations, analyzing video to detect intrusions and threats in real-time, and sending alerts to a security guard’s computer or phone.”
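
A real-time alert loop of the kind Umbo describes can be pictured roughly as follows; classify_threat and send_alert are hypothetical placeholders, not Umbo’s or Nvidia’s actual APIs:

```python
# Rough sketch of a real-time monitoring loop: score each frame for a threat
# and push an alert to a guard's device. All names here are placeholders.
import time

ALERT_THRESHOLD = 0.9  # assumed confidence cutoff, not a vendor default


def classify_threat(frame) -> float:
    """Placeholder for a trained intrusion/threat classifier."""
    return 0.0


def send_alert(message: str) -> None:
    """Placeholder for a push notification to a guard's phone or console."""
    print("ALERT:", message)


class DummyCamera:
    """Stands in for a live video source."""
    def read(self):
        return None


def monitor(camera) -> None:
    # Score each frame; anything over the threshold wakes up a human.
    while True:
        frame = camera.read()
        score = classify_threat(frame)
        if score >= ALERT_THRESHOLD:
            send_alert(f"possible intrusion (confidence {score:.2f})")
        time.sleep(0.1)  # ~10 frames per second, purely illustrative


# monitor(DummyCamera())  # would run until interrupted
```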

Nvidia’s product manager, Saurabh Jain, explains that “the same computer vision technologies are used for self-driving vehicles, drones, and other autonomous systems, to recognize and interpret the surrounding environment.”

A Russian startup, Vision Labs, uses Nvidia hardware for facial recognition to spot shoplifters in stores and identify unwanted “problem customers” in casinos. The company can work with clients everywhere, project manager Vadim Killmnichenko boasts: “We can deploy this anywhere through the cloud.”

Banks are one place where Vision Labs’ technology is deployed: facial recognition tools are used to prevent fraud by detecting “if someone is using a false identity.”
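
In outline, that kind of check compares the face presented at the counter or camera against the face on file for the account. The sketch below uses the open-source face_recognition library as a stand-in, not Vision Labs’ product, and the image paths are hypothetical:

```python
# Rough sketch of identity verification by face matching, using the
# open-source face_recognition library as a stand-in (not Vision Labs'
# product). Image paths are hypothetical.
import face_recognition

# Embedding of the face photo on file for the account holder.
enrolled = face_recognition.face_encodings(
    face_recognition.load_image_file("account_holder.jpg")
)[0]

# Embedding of the person presenting themselves right now.
candidates = face_recognition.face_encodings(
    face_recognition.load_image_file("live_capture.jpg")
)

if not candidates:
    print("No face found in the live capture.")
else:
    match = face_recognition.compare_faces([enrolled], candidates[0],
                                           tolerance=0.6)[0]
    print("Identity consistent" if match else "Possible false identity")
```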

All this new technology comes at a price. Marc Rotenberg, president of the Electronic Privacy Information Center, is concerned about the privacy risks that come with the rapid growth of AI. “Some of these techniques can be helpful but there are huge privacy issues when systems are designed to capture identity and make a determination based on personal data,” he warns. “That’s where issues of secret profiling, bias, and accuracy enter the picture.”

Rotenberg believes that if AI systems are to be used in criminal investigations, they need to be tightly monitored to “ensure legal safeguards, transparency, and procedural rights.” Shelly Kramer of Futurum Research agrees, writing in her blog that “AI holds great promise for law enforcement, be it for surveillance, scanning social media for threats, or using ‘bots’ as lie detectors. With that encouraging promise, though, comes a host of risks and responsibilities.”

 
