
Against Facial Recognition

Sunday, August 16th, 2020

I’m not sure if you need this. But for some people in some countries it could be very important, assuming that it works. I’ve always been very open on-line, posting only under my real name, and everything I post is public. I’ve been careful, though, only to post things that I don’t mind everyone knowing about me.

As a journalist I’ve had some advice and training on privacy issues, particularly on messaging and e-mail, but I have never felt I was in a situation where I needed to put it into practice. I do sometimes worry a little about my pictures on-line, though, and how they might be used by groups both legal and illegal – including the police, who are already making use of facial recognition in various city environments – to build up profiles of some of those present.

There have been various attempts to block facial recognition, both through the courts and through various subterfuges, including the use of masks and special makeup. Covid-19 has surely added to the problems faced by deep neural networks in recognising individuals, and whereas wearing a mask in public was often a criminal offence, now you may be fined for not wearing one.

What is new about Fawkes (it takes its name from the Guy Fawkes mask popularised by ‘Anonymous’), developed by a team of students at the SAND Lab at the University of Chicago, is that it is the first tool to enable us to “protect ourselves against unauthorized third parties building facial recognition models that recognize us wherever we may go”, one that “gives individuals the ability to limit how their own images can be used to track them”. It is claimed to defeat the deep-learning tools used to identify individuals by systems such as Clearview AI (https://www.vice.com/en_us/article/5dmkyq/heres-the-file-clearview-ai-has-been-keeping-on-me-and-probably-on-you-too).

The team explain how Fawkes works (and for the technically minded there is a paper and source code available on the site):

At a high level, Fawkes takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking.

They go on to state that “if and when someone tries to use these photos to build a facial recognition model, ‘cloaked’ images will teach the model a highly distorted version of what makes you look like you.”
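Purely to make the idea concrete, here is a toy sketch of that kind of cloaking: nudge the pixels so a feature extractor “sees” someone else, while a small per-pixel budget keeps the change invisible. Everything here – the random linear stand-in for a face-embedding network, the names, the numbers – is invented for illustration and is not the Fawkes method; Fawkes itself backpropagates through deep face-embedding models and bounds a perceptual measure of distortion rather than raw pixel values, as the paper describes.

```python
# Toy sketch of image cloaking, NOT the Fawkes implementation.
# A random linear map stands in for a deep face embedder so the
# gradient is analytic and the script actually runs.
import numpy as np

rng = np.random.default_rng(0)
H, W_px, D = 32, 32, 64                   # tiny "image" and feature sizes
W = rng.standard_normal((H * W_px, D))    # stand-in feature extractor

def features(img):
    return img.reshape(-1) @ W            # linear "embedding"

def cloak(img, target_feats, steps=200, lr=1e-4, budget=8.0):
    """Nudge img so its features approach target_feats (another
    identity), keeping every pixel within +/- budget of the original
    (the 'invisible to the human eye' constraint)."""
    x = img.astype(np.float64).reshape(-1)
    orig = x.copy()
    for _ in range(steps):
        # Gradient of ||W^T x - t||^2 with respect to the pixels x.
        grad = 2.0 * W @ (features(x) - target_feats)
        x = x - lr * grad
        # Clip the accumulated perturbation to the per-pixel budget.
        x = orig + np.clip(x - orig, -budget, budget)
    return np.clip(x, 0, 255).reshape(H, W_px)

me = rng.integers(0, 256, (H, W_px)).astype(np.float64)            # "my" photo
someone_else = rng.integers(0, 256, (H, W_px)).astype(np.float64)  # target identity
cloaked = cloak(me, features(someone_else))

print("max pixel change:", np.abs(cloaked - me).max())             # small, bounded
print("feature shift:", np.linalg.norm(features(cloaked) - features(me)))
```

The printout shows the point: pixel changes stay within a tiny budget while the features a model would learn from move a long way.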

[Image pair: ‘Original’ and ‘Cloaked’ versions of the same portrait]

I’ve downloaded the software (a small file available for Mac and PC) and run it on a picture or two. It was rather slow – but my first files were large. I tried it again on a couple of 600×400 pixel images to post here, and it took around 100 seconds to convert the pair.

The differences are real but pretty subtle – easier to see if you right-click to download the files and then view them one after the other in your image viewer. The change between the two in each pair gives me a slightly weird feeling.
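If you would rather see where the changes are than flick between the two images, a few lines of Python will subtract the pair and stretch the residual. This is a sketch, assuming Pillow and numpy are installed; the file names are illustrative and should point at any same-sized original/cloaked pair.

```python
# Make the subtle cloaking changes visible by amplifying the residual.
import numpy as np
from PIL import Image

orig = np.asarray(Image.open("original.jpg").convert("RGB"), dtype=np.int16)
cloaked = np.asarray(Image.open("cloaked.jpg").convert("RGB"), dtype=np.int16)

diff = np.abs(orig - cloaked)             # per-pixel change
print("max change:", diff.max(), "mean change:", diff.mean())

# Stretch the residual to full range so the eye can see where the
# cloak concentrates its changes (typically around facial features).
amplified = (diff.astype(np.float64) / max(int(diff.max()), 1) * 255).astype(np.uint8)
Image.fromarray(amplified).save("difference.png")
```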

But these were both images of a single person, and I thought I’d try it on something rather more complex but the same size. Although it said it would take about a minute, five minutes later I was still waiting, and waiting. I went away and did something else, and I think it took around 7–8 minutes in all. There were small differences to most of the larger faces in the image, but many appeared completely unchanged.

[Image pair: ‘Original’ and ‘Cloaked’ versions of a more complex scene with many faces]

The input files were all JPEGs, but the output files are PNGs, roughly five times the size in bytes. They had also lost their keywords and presumably other metadata. The files went back to a similar size to the originals when saved from Photoshop as JPEGs at an appropriate quality level, and it is these I’ve used here. Saving as JPEG perhaps very slightly diminishes the differences.
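That re-saving step can also be scripted. Below is a minimal sketch, assuming Pillow; the file names are illustrative. It copies the EXIF block across from the original so at least that metadata survives – though keywords usually live in IPTC/XMP rather than EXIF, so a tool such as exiftool may be needed to carry those over.

```python
# Re-save the PNG that the cloaking tool writes as a JPEG, copying
# the EXIF metadata across from the original photograph.
from PIL import Image

original = Image.open("original.jpg")
cloaked = Image.open("cloaked.png").convert("RGB")   # drop any alpha for JPEG

exif = original.info.get("exif")     # raw EXIF bytes, if the JPEG had any
if exif:
    cloaked.save("cloaked_small.jpg", "JPEG", quality=90, exif=exif)
else:
    cloaked.save("cloaked_small.jpg", "JPEG", quality=90)
```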

I have of course no way of knowing whether the ‘cloaked’ files would, as the inventors say their trials show, provide at or near 100% protection “against state of the art facial recognition models from Microsoft Azure, Amazon Rekognition, and Face++”; I can only accept their assurances, and presumably their paper gives more details of their testing.

Fawkes is at the moment more a proof of concept than usable software, and you would have to be very concerned about your on-line privacy to routinely treat pictures with it. But it does show that there are technical ways to fight back against the increasing abuse of personal data and its commercial exploitation by corporations.

Recently we’ve seen complaints from protesters about photographers putting their pictures on-line, with some arguing that their permission is needed or that faces should be pixellated. While photographers rightly defend their right to photograph and publish public behaviour as a matter of freedom of speech – and the idea of claiming privacy seems to negate the whole idea of protest – I can see no objection to minor alterations which retain the essential image while frustrating AI-assisted data acquisition. It would, I think, be rather nice if Adobe could incorporate similar technology as an optional ‘privacy mode’.

Images used above are from My London Diary No War With Iran protest on 4th Jan 2020 opposite Downing St.


All photographs on this and my other sites, unless otherwise stated, are taken by and copyright of Peter Marshall, and are available for reproduction or can be bought as prints.