Adversarial T-shirts to Evade ML Person Detectors
Neural networks are exceptionally good at recognizing objects in an image, and in many cases they have shown superhuman levels of accuracy (e.g., traffic sign recognition).
But they are also known to have an interesting property: introduce some small, carefully chosen changes to the input photo, and the neural network will wrongly classify it as something completely different. Such attacks are known as adversarial attacks on a neural network. One important variant is the Fast Gradient Sign Method (FGSM) by Ian Goodfellow et al., from the paper Explaining and Harnessing Adversarial Examples. Properly implemented, such methods add noise that is barely perceptible to the human eye yet fools the neural network classifier. The classic example from that paper perturbs a panda image so that it is confidently classified as a gibbon.
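To make this concrete, here is a minimal FGSM sketch in PyTorch (an illustration, not code from the paper): it nudges the input in the direction of the sign of the loss gradient, with epsilon controlling how visible the noise is. The model argument is any differentiable classifier you supply.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.01):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the gradient sign and keep pixel values in [0, 1].
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```

Even a tiny epsilon is often enough to flip the predicted class while leaving the image visually unchanged.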
Image from the paper Explaining and Harnessing Adversarial Examples

Real-time Object Detectors: The Backbone of Automated Surveillance:
Image source: Pixabay

Object detection is a classic problem in computer vision: given an image, you have to localize the objects present in it and classify the category each one belongs to. Usually this is done by training a neural network on a dataset containing a sufficient number of images for each category of interest, with the output being the location of each object along with its probability of belonging to each category. Some of the most popular object detector models are YOLO and Faster R-CNN. Detection is real-time and can be embedded in a video feed to detect objects instantaneously.
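To make the detector interface concrete, here is a minimal inference sketch using torchvision's pretrained Faster R-CNN (an assumed implementation for illustration; the article does not prescribe one). The file name frame.jpg is a placeholder for a video frame.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
img = convert_image_dtype(read_image("frame.jpg"), torch.float)  # placeholder frame

with torch.no_grad():
    out = model([img])[0]  # one dict of boxes/labels/scores per input image

# Keep confident "person" detections (COCO class id 1).
keep = (out["scores"] > 0.8) & (out["labels"] == 1)
print(out["boxes"][keep], out["scores"][keep])
```

Running a loop like this over every frame of a video feed is what makes such detectors usable for automated surveillance.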
Image source: Wikipedia, under Creative Commons license

Fooling the Object Detectors:
Using the adversarial attacks discussed above, researchers have successfully created images and patches that can baffle an object detector. These have been used to evade detection of glasses frames, stop signs, and images attached to cardboard. But surveillance systems often view a person from a wide variety of angles, so a flat patch is not sufficient for the problem at hand. To handle this, the authors tracked the deformations of a t-shirt and mapped them onto the adversarial image until it was able to fool the classifier: they printed a checkerboard pattern on a t-shirt and measured how it deformed in every video frame, then used the thin plate spline (TPS) interpolation technique to apply those deformations when replacing the checkerboard patch with other images. Finally, Kaidi Xu and colleagues at Northeastern University, the MIT-IBM Watson AI Lab, and MIT designed a t-shirt able to baffle a variety of object detection methods, including YOLOv2 and Faster R-CNN, which failed to detect people wearing the shirt.
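Here is a minimal sketch of the TPS warping idea, using SciPy's thin-plate-spline RBF interpolator as a stand-in for whatever implementation the authors used; the control points and patch below are made-up placeholders, not data from the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp(patch, src_pts, dst_pts):
    """Warp `patch` so the flat grid src_pts lands on the deformed grid dst_pts."""
    h, w, c = patch.shape
    # Fit a TPS mapping deformed coordinates back to flat ones, so we can
    # sample the flat patch at every output pixel (inverse warping).
    tps = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")
    yy, xx = np.mgrid[0:h, 0:w]
    sample = tps(np.column_stack([xx.ravel(), yy.ravel()]).astype(float))  # (h*w, 2) as (x, y)
    return np.stack(
        [map_coordinates(patch[..., k], [sample[:, 1], sample[:, 0]], order=1).reshape(h, w)
         for k in range(c)],
        axis=-1,
    )

# Hypothetical 3x3 grid of checkerboard corners and their tracked positions
# in one video frame (random jitter stands in for cloth deformation).
src = np.array([[x, y] for y in (0, 50, 100) for x in (0, 50, 100)], dtype=float)
dst = src + np.random.uniform(-5, 5, src.shape)
patch = np.random.rand(101, 101, 3)  # stand-in adversarial patch
warped = tps_warp(patch, src, dst)
```

Applying the per-frame deformation like this lets the optimization account for how cloth wrinkles and folds as a person moves, which a rigid printed patch cannot survive.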
Image taken from the paper Adversarial T-shirt! Evading Person Detectors in A Physical World

Results:
Finally, after printing these shirts, they were put into action in front of surveillance cameras and videos of the results were recorded. The YOLOv2 model was fooled in 57 percent of frames in the physical world and 74 percent in the digital world, a substantial improvement over the previous state of the art's 18 percent and 24 percent.
Conclusion:
This paper provides useful insight into how adversarial perturbations and noise can be realized in real-life scenarios. Images generated by this method can in turn be used to train classifiers, producing more robust ones; such classifiers may prevent object detectors from being tricked in the future. A rough sketch of that training idea follows.
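As a toy illustration of that idea (not the authors' procedure), the loop below folds FGSM-perturbed examples into each training batch so the classifier also learns from them; the model and data here are placeholders, not a real dataset.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def fgsm(model, x, y, epsilon=0.01):
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

for _ in range(10):  # toy loop on random data standing in for a real dataset
    x = torch.rand(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm(model, x, y)  # craft adversarial batch first...
    opt.zero_grad()            # ...then clear its stray parameter gradients
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
```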
Source: https://towardsdatascience.com/adversarial-t-shirts-to-evade-ml-person-detectors-8f4ee7af9331