Face recognition, machine learning

As part of my third year at Camberwell I’ll be doing research into machine learning, computer vision etc. These are my working notes.

January 2, 2018

I managed to work my way through Hands-On Machine Learning with Scikit-Learn & TensorFlow by Aurélien Géron. Some of the more advanced math is still beyond me (remembering how vectors work was hard enough), but I feel like I’ve now got an actual understanding of some of the acronyms that get thrown around a lot: Deep Learning, Neural Networks, MLP, TensorFlow and so on.

An important point that’s made early in the book is that machine learning isn’t the same thing as neural networks. Géron quotes Tom Mitchell (1997):

A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.

Basic methods like linear regression fall under this definition just as much as neural networks do.
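A minimal scikit-learn sketch of the definition (the data here is synthetic, made up purely for illustration): the task T is predicting y from x, the experience E is the training set, and the performance measure P is the mean squared error.

```python
# Mitchell's definition applied to linear regression:
#   task T: predict y from x
#   experience E: the training examples
#   performance P: mean squared error, which improves with E
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))               # synthetic inputs
y = 3.0 * X.ravel() + 2.0 + rng.normal(0, 1, 100)   # noisy line

model = LinearRegression()
model.fit(X, y)  # "learning from experience E"

mse = mean_squared_error(y, model.predict(X))
print(f"MSE: {mse:.3f}, slope: {model.coef_[0]:.2f}, intercept: {model.intercept_:.2f}")
```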

Chapter 14, which deals with Recurrent Neural Networks, is particularly exciting. As Géron points out,

RNNs [are] a class of nets that can predict the future. […] RNNs’ ability to anticipate also makes them capable of surprising creativity.

Evidently this is the technology behind some of these Google Magenta Experiments. A later chapter in the book describes how you can train a neural network in such a way that given a set of source images, it can generate new images that look as real as the input images - exciting stuff. I’m hoping to do this with images of faces - generating portraits of people that don’t exist.
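The technique I have in mind is a generative adversarial network (GAN): a generator turns random noise into images, a discriminator tries to tell generated images from real ones, and the two are trained against each other. A minimal sketch of the idea using Keras, assuming tiny fully-connected networks (actual face generation would need convolutional nets and far more compute):

```python
import numpy as np
from tensorflow import keras

LATENT_DIM = 100
IMG_SHAPE = (28, 28)  # stand-in size; real face images would be larger

# Generator: maps a random noise vector to an image.
generator = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(LATENT_DIM,)),
    keras.layers.Dense(np.prod(IMG_SHAPE), activation="sigmoid"),
    keras.layers.Reshape(IMG_SHAPE),
])

# Discriminator: classifies an image as real (1) or generated (0).
discriminator = keras.Sequential([
    keras.layers.Flatten(input_shape=IMG_SHAPE),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: the discriminator is frozen here, so training
# the combined model only updates the generator.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_images, batch_size=32):
    """One adversarial step; real_images is a (batch_size, 28, 28) float array in [0, 1]."""
    noise = np.random.normal(size=(batch_size, LATENT_DIM))
    fake_images = generator.predict(noise, verbose=0)
    # Train the discriminator on real vs generated images.
    discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # Train the generator to fool the discriminator (labels flipped to 1).
    gan.train_on_batch(noise, np.ones((batch_size, 1)))
```

In practice you’d call train_step in a loop over batches of real face images for many epochs, then sample the generator with fresh noise to get new faces.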

However, I do suspect that the laptop I’m typing this on won’t have nearly enough processing power to do all of that. Finding enough source images will also be a concern. This points to the main problem with advanced machine learning: while the math is well established and relatively accessible, access to the vast amounts of processing power and training data required to build useful software is limited to large organisations.

January 13, 2018

I got my hands on something called the FERET database. This is a collection of images of faces that the U.S. military commissioned in the mid-nineties, containing about 11,000 images of roughly 800 individuals from different angles, wearing different clothes etc. It’s what much of modern research into facial recognition algorithms has been based on. Here’s the relevant government website.

The way you get this database is by emailing the U.S. Department of Defense. Once you do, they give you login details to download the database. It comes in a weird 90s format, so I had to spend some time extracting and converting the images so I could look at them.
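A sketch of how the conversion step could be scripted. I’m assuming here the bzip2-compressed PPM files of the colour FERET release, and the directory names are made up, so treat the details as assumptions rather than a recipe:

```python
# Decompress bzip2'd PPM images and re-save them as PNGs with Pillow.
import bz2
import io
from pathlib import Path
from PIL import Image

SRC = Path("colorferet/images")  # hypothetical source directory
DST = Path("feret_png")
DST.mkdir(exist_ok=True)

for archive in SRC.rglob("*.ppm.bz2"):
    with bz2.open(archive, "rb") as f:
        img = Image.open(io.BytesIO(f.read()))
        img.save(DST / (archive.name.removesuffix(".ppm.bz2") + ".png"))
```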

Example images from the FERET database

I’m not sure what I’m going to do with these images. I could use them to train a neural network, but they’re also an interesting artifact in themselves. They’re essentially a time capsule from the campus of George Mason University in the 1990s - 90s haircuts etc. I also like the idea that these are images only ever intended for machines to look at. And the fact that they’re basically scientific documents created for a government agency, yet some of them are surprisingly artistic.

Evidence (1977) by Larry Sultan and Mike Mandel. Image source

It reminds me of Evidence (1977) by Larry Sultan and Mike Mandel, where they took NASA research photographs out of their original context and put them in a new order that tells a story.

January 15, 2018

Turns out Trevor Paglen did some work on the FERET images very recently. The exhibition also includes machine-generated images and some original photography - all very successful. I’ll try and get an exhibition catalogue.

Paglen has been doing this work for a while. Other projects of his include Invisible: Covert Operations and Classified Landscapes, a book on restricted government sites, and Blank Spots on the Map, which is about how governments manipulate maps to hide what they’re doing.

January 16, 2018

Readings

Tracey suggests I go see an exhibition called Metadata - How we relate to images at CSM - I’ve scheduled it for Saturday.

I’ve spent some more time with the FERET database, going through the images, printing some of them and reading some of the related government reports.

The 1996 paper points out:

Some questions were raised about the age, racial, and sexual distribution of the database. However, at this stage of the program, the key issue was algorithm performance on a database of a large number of individuals.

This might be an area worth exploring. The photos were collected by GMU, suggesting that most of the volunteers were probably students and university staff (not military employees, as is sometimes suggested). In some sense the whole history of institutional racism and sexism might be baked into this database?

Might be good to run some analytics on the gender / age / race distribution of the database.

I’m still interested in how exactly these photography sessions were conducted - how did they recruit volunteers, whose office was turned into a studio, what did people at the time say about the program etc.

January 17, 2018

Readings

Segune suggests two additional readings on photographic archives (after seeing the FERET images): Okwui Enwezor’s Archive Fever: Photography between History and the Monument and Hal Foster’s An Archival Impulse. Notes below.

Installation view of 48 Portraits by Gerhard Richter, Tate Modern

She also points out 48 Portraits (1971-98) by Gerhard Richter.

Notes on “Invisible Images (Your Pictures Are Looking at You)”

On a basic level, Paglen argues that existing models of visual culture are becoming less relevant because the vast majority of images are now created by machines for other machines. This has to do with the fact that a digital image is primarily machine-readable. You can only make it visible to human eyes for a brief moment using additional software, screens etc.

The second main point is that images are no longer primarily used as representations. Instead, machines use images to make predictions, activate mechanisms and generally actively change the real world. In his words:

Images have begun to intervene in everyday life, their functions changing from representation and mediation, to activations, operations, and enforcement. Invisible images are actively watching us, poking and prodding, guiding our movements, inflicting pain and inducing pleasure. But all of this is hard to see.

Paglen cites a number of examples of this that have been in operation for years, including cases where license plates are recognised and used to track people’s movements, and retail companies that analyse customers’ facial expressions.

He makes the point that places like Facebook are closely modelled on traditional notions of sharing images (using skeuomorphic terms like albums and slideshows), but this is only true on the surface. Underneath, your photos are feeding highly developed machine learning algorithms designed to extract value from your images (now or in the future). As Paglen points out, you could easily imagine the license plate recognition case being expanded to include images people share on social media.

He closes by saying that the long-term solution to this needs to be regulation - “hacks” that might be effective against recognition algorithms today will lose their effectiveness over time.

We no longer look at images - images look at us. They no longer simply represent things, but actively intervene in everyday life. We must begin to understand these changes if we are to challenge the exceptional forms of power flowing through the invisible visual culture that we find ourselves enmeshed within.

January 20, 2018

Notes on Segune’s Readings

(She suggested these a few days ago)

Archive Fever: Photography between History and the Monument

This cites an essay called The Body and the Archive (1986) by Allan Sekula, which talks about how photographic archives have been used as “an instrument of social control and differentiation underwritten by dubious scientific principles”.

Bertillon archive, The Metropolitan Museum of Art

Sekula talks about Alphonse Bertillon, a French policeman who created a huge bullshit system to classify criminals based on photographs of their faces. The Met seems to have a good collection of his stuff. The Science Museum has some of the instruments he used to measure various facial features.

Francis Galton ran similar archival projects to classify people along racial lines (the Nazis were big fans).

Their projects, Sekula writes, “constitute two methodological poles of the positivist attempts to define and regulate social deviance”. The criminal (for Bertillon) and the racially inferior (for Galton) exist in the netherworld of the photographic archive, and when they do assume a prominent place in that archive, it is only to dissociate them, to insist on and illuminate their difference, their archival apartness from normal society.

Enwezor goes on to describe a number of examples where archives are used as a way to conserve power, present existing systems of oppression as natural etc.

An Archival Impulse

January 24, 2018

Metadata at the Lethaby Gallery

January 25, 2018

TODO Spoke to Segune about FERET images

January 26, 2018

TODO Jak tutorial, discussed ways of presenting face images

January 27, 2018

TODO Decided to print FERET images, looks like it’s expensive, need to talk to the technician, emailed Tracey

January 29, 2018

TODO Peer assessment

February 14, 2018

Eigenfaces are a way to represent images used in facial recognition software, first introduced by Turk and Pentland (1991). Below is figure 2 from that paper:

Eigenfaces, from Turk and Pentland (1991)

Something intriguing about the aesthetics of research papers.

More eigenfaces, from OpenCV
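Eigenfaces are just the principal components of a stack of aligned face images, so they’re easy to reproduce with scikit-learn. A sketch using the Labelled Faces in the Wild set that ships with scikit-learn (FERET would work the same way once the images are aligned and flattened):

```python
# Compute "eigenfaces": run PCA on flattened face images and
# reshape the principal components back into images.
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA

faces = fetch_lfw_people(min_faces_per_person=50)
h, w = faces.images.shape[1:]

pca = PCA(n_components=16, whiten=True)
pca.fit(faces.data)  # each row of faces.data is one flattened face

# Each principal component, reshaped to image size, is an eigenface.
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for ax, component in zip(axes.ravel(), pca.components_):
    ax.imshow(component.reshape(h, w), cmap="gray")
    ax.axis("off")
plt.show()
```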

February 18, 2018

Another Face Database

The National Institute of Standards and Technology (which provides the FERET database) also has something called the Multiple Encounter Dataset (MED). This is a database containing 683 mugshots of deceased people, used to develop facial recognition software. This is starting to get much closer to Bertillon. I’m assuming that using photographs of dead people allows them to get around some privacy concerns. They’ve also removed (in some cases blacked out) any reference to the person’s name or reason for arrest. So what you’re left with is this archive of black and white photographs of people from the 60s, 70s and 80s (judging by the haircuts).

Mugshots, National Institute of Standards and Technology

With the images comes a datafile describing the photographs:

Subject | Encounter | Record | DOB | WGT | SEX | HGT | RAC | HAI | EYE | PHD | IMT | POS | VLL | HLL

Interestingly, this contains fields for the detainee’s height (e.g. 5′11″), weight (in lbs.) and date of birth.
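This datafile would also make the kind of demographic tally I mentioned earlier straightforward. A sketch, assuming the records can be read as a whitespace-delimited table with the headers above (the real file format may well differ, and med_records.txt is a made-up filename):

```python
import pandas as pd

# Hypothetical filename; assumes one whitespace-delimited row per photograph.
records = pd.read_csv("med_records.txt", sep=r"\s+")

print(records["SEX"].value_counts())  # sex distribution
print(records["RAC"].value_counts())  # race distribution

# Birth decades as a rough proxy for age distribution
# (DOB format is an assumption; errors="coerce" skips unparseable values).
birth_years = pd.to_datetime(records["DOB"], errors="coerce").dt.year
print((birth_years // 10 * 10).value_counts().sort_index())
```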

February 27, 2018

Eigenfaces

Some more face databases. I’m thinking the reason these are all from the 90s is that research doesn’t need this sort of standardised database anymore - people now work with images collected from the internet. Labelled Faces in the Wild is an example. This has the benefit of being much cheaper than taking original photographs - you can create a database that is orders of magnitude larger for the same amount of money.

Facebook research uses internal databases with millions of faces. Maybe there’s something to this idea: back in the day, collecting a database had to be a dedicated effort. Now we’re all contributing involuntarily to face recognition algorithms (and other machine learning applications) by way of our behaviour, movements and writing. Two examples of the older, standardised databases:

AT&T Database of Faces, AT&T Laboratories Cambridge

XM2VTS database, University of Surrey: http://www.ee.surrey.ac.uk/CVSSP/xm2vtsdb/

March 6, 2018: RNNs

This might be a fun project to get into generating things with neural networks: the New York Times has an API that makes it really easy to get their content programmatically. I pulled every article headline from January 2016 to the present - about 4MB of text. This TensorFlow setup makes it trivial to train a character-based RNN on the data, and eventually generate new headlines that (somewhat) match the language of the New York Times. It’s pretty amazing to see the network learn English from scratch in a few hours of training.
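A sketch of how such a pull could work, using the NYT Archive API, which returns one month of article metadata per request (you need a free API key from developer.nytimes.com; the response fields used here are my understanding of the API, so double-check them):

```python
# Pull NYT headlines month by month via the Archive API
# and write them to a text file for RNN training.
import requests

API_KEY = "..."  # your key from developer.nytimes.com
headlines = []

for year in (2016, 2017, 2018):
    for month in range(1, 13):  # adjust the range to months that exist
        url = f"https://api.nytimes.com/svc/archive/v1/{year}/{month}.json"
        resp = requests.get(url, params={"api-key": API_KEY})
        resp.raise_for_status()
        for doc in resp.json()["response"]["docs"]:
            main = doc.get("headline", {}).get("main")
            if main:
                headlines.append(main)

with open("headlines.txt", "w") as f:
    f.write("\n".join(headlines))
```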

The Dutch Polders by Bike and Schooner
The Royals Take the Title
‘The Affair’ Season 2 Episode 5: Never Read the Book
‘The Walking Dead’ Season 6, Episode 4 Recap: The Making of Morgan
‘Homeland’ Recap, Season 5, Episode 5: Can Carrie Figure Out What’s Going On With Allison?
Long Lines for Story Time
The Best Moments in College Football This Week
Dangers for the Unwary
Q. and A.: Chan Koonchung on Imagining a Non-Communist China
Report on Bella Vista Health Center
Inside the Trial of Sheldon Silver
Jeb Bush Says He Was Unaware of Rubio PowerPoint Deck

This sort of automated writing is already widely used at mainstream outlets. The Washington Post seems to be leading the pack.

April 16, 2018: Tutorial Notes

Newspaper Clippings

Fake Letterpress Newspaper Clippings

Large Scale Drawing Machine

Continues to be a health and safety nightmare.

Machine Learning Dataset Book

ML Book Spread