Example of weapons used in the data collection process: (a) combat knives, (b) BB gun airsoft Galaxy G.26 (Sig 226), (c) BB gun airsoft Galaxy G.052 (M92), (d) BB gun airsoft Galaxy G29 (Makarov).

I came across a very interesting project related to artificial intelligence research a couple of days ago and thought it was worth sharing. By interesting, in this case, I actually mean a little scary. Over the past few years the US federal government has been funding research into AI-based automatic weapons detection in surveillance videos. This research has led to an unexpected side effect.

First, let's talk about how AIs are trained to detect weapons in video feeds. Like most machine-learning systems, these detectors learn from millions of labeled samples, which teach the model the pattern recognition needed to spot weapons. That means you have to acquire millions of minutes of surveillance video of people carrying firearms, label the footage so the AI can recognize the weapons, and feed those videos into a training system. The first step of this process was a real problem for the developers, because very little public video of people carrying weapons exists. It was the solution to this problem that created the unexpected side effect I mentioned earlier. If the video footage does not exist, how do you train the AI? The answer is with simulated video footage.
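To make that a little more concrete, here is a minimal sketch of what such a training setup might look like, using Python with the open-source PyTorch and torchvision libraries. Everything here is illustrative, not taken from the actual research: the data loader, the single "weapon" label, and the hyperparameters are all assumptions.

```python
# A minimal sketch of fine-tuning an off-the-shelf object detector on
# labeled surveillance frames. The dataset, label set, and settings are
# hypothetical placeholders; real systems are far larger.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + "weapon" (assumed label set)

# Start from a detector pre-trained on COCO and swap in a new classifier head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# `loader` is assumed to yield (frames, targets), where each target holds
# the hand-labeled bounding boxes for the weapons visible in that frame.
for frames, targets in loader:
    loss_dict = model(frames, targets)  # detection models return losses in train mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point the sketch illustrates is the bottleneck in the text above: every frame that goes through that loop needs labeled bounding boxes, which is exactly the data that did not exist.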

Several months ago I wrote about deep-fakes, in which AI is used to generate fake video footage of just about anything you can imagine. In a paper published in October 2022, "ACF: An Armed CCTV Footage Dataset for Enhancing Weapon Detection," Narit Hnoohom, Pitchaya Chotivatunyu, and Anuchit Jitpattanakul approached the problem by using "fake" weapons to train the algorithms on live video. They reached a 98% detection rate on their subset of four simple weapons: combat knives and the Airsoft Galaxy G.26, G29, and G.052 BB guns. Their team was among the first to prove that an AI could be trained to recognize weapons, which led to additional government funding to continue the research. Obviously they could not obtain real weapons, nor the permits to own the full class of weapons, so the research turned in a different direction.

The research shifted from the detection of weapons to the automatic generation of test footage for detection systems. Basically, this means using AI to generate "fake" closed-circuit television (CCTV) footage of people carrying weapons. Some of the most popular AI video generators take simple text prompts to produce the fake CCTV footage. For example, you can ask the AI to generate a video of an 8-year-old girl carrying a semi-automatic handgun in a school hallway, and you will get a fake CCTV video assembled by the AI from public surveillance videos of real schools, real guns, and generated people. These simulated CCTV images are realistic enough to test AI weapons-detection systems, and also realistic enough that most people will not recognize that they are fake.
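For a sense of how simple those inputs can be, here is a hedged sketch using the open-source diffusers library with a publicly released text-to-video model. The model name, prompt, and parameters are my own illustrative choices, not anything from the research described above, and the exact shape of the `.frames` output varies between diffusers versions.

```python
# Illustrative only: driving a small, publicly available text-to-video
# model with a plain-text prompt. Research-grade generators are presumably
# far more capable; this just shows the prompt-to-video workflow.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",  # a public text-to-video checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "grainy overhead CCTV footage of a person carrying a handgun down a hallway"
result = pipe(prompt, num_inference_steps=25, num_frames=48)
frames = result.frames[0]  # note: older diffusers versions return the frame list directly

export_to_video(frames, "synthetic_cctv.mp4")
```

The entire "script" is one sentence of English, which is what makes this class of tool both useful for generating test footage and worrying in the hands of anyone else.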

The scary part came when I saw images generated by the algorithm that matched images I have seen on mainstream newscasts, images of supposedly real events. I won't go into details on the events, but I bring up the point to say that you cannot trust that what you see on TV and the internet is a real-world event. I believe we will begin to see more and more "fake" video footage on the news. I don't believe the primary reason, at least initially, will be to tell "fake" news stories, but rather to protect the identities of innocent people in the real footage. For example, if you were covering a school shooting "live," you might broadcast a fake video in the setting of the school to prevent a parent from seeing their child shot on national television. However, I don't believe it will stop there, and we all need to be aware that the technology exists to create seemingly real scenarios on live TV.

We are living in interesting times, where it is becoming more and more difficult to recognize the truth. Until next week, stay safe and learn something new.

Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to sh*******@**********rd.org or through his website at https://www.techshepherd.org.
