Surfaces & Strategies - Week 4

For our forum activity this week we were asked to find an example of a photographic image not made by a human, then share and discuss that image with our peers.

I began with two images. The first is the famous monkey ‘selfie’ (Naruto) taken in 2011; I wasn’t alone amongst my peers in picking it. It was shot by the monkey itself, a rare crested macaque resident in the Tangkoko reserve on the island of Sulawesi (Indonesia), using the photographer David Slater’s equipment. Slater’s use of the image resulted in a well-known copyright case in which the animal rights group PETA sued on ‘behalf’ of the monkey. The case was heard in the US and resulted in a judgement that the monkey was ineligible to hold copyright over the image. This could also be described as an artistic collaboration, maybe?

Naruto by Naruto, 2011 (David J Slater/Caters News Agency)

The second, below, is a digital image of a number of cats generated algorithmically by a DCGAN (Deep Convolutional Generative Adversarial Network). The network was trained on elements of over 10,000 photographic images of cats (probably not too difficult to collate in this Instagram age). Whilst low resolution, the results are very authentic-looking constructs.

Xudong Mao, 'LSGAN generating cats in 128×128'
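For background, the mechanics are worth a brief sketch. A GAN pits two networks against each other: a generator that turns a vector of random numbers into an image, and a discriminator that learns to tell those fabrications from the thousands of real cat photographs, each improving against the other. Below is a minimal outline of a DCGAN-style generator in Python/PyTorch; it is my own illustration of the technique, not the actual network behind the cat images above.

```python
# Minimal DCGAN-style generator (illustrative sketch only, not the network
# used for the cat images). It maps a 100-dimensional random 'latent' vector
# up through transposed convolutions to a 64x64 RGB image.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100, feature_maps=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(True),
            # 32x32 -> 64x64 RGB image, pixel values in [-1, 1]
            nn.ConvTranspose2d(feature_maps, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Even an untrained generator will turn pure noise into an 'image'; training
# against a discriminator on real cat photos is what makes the output cat-like.
generator = Generator()
noise = torch.randn(1, 100, 1, 1)   # one random latent vector
fake_image = generator(noise)       # tensor of shape (1, 3, 64, 64)
print(fake_image.shape)
```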

This led me to consider whether digitally created images should be regarded as equivalent to digital photographs, at least in the near future, given the ever-improving application of computerised digital imagery. I also noted a 2017 reference showing that work is now proceeding on doing the same with human faces.

Early days, but here's a recently produced gif file of a number of DCGAN-created human headshots:

Deep Convolutional Generative Adversarial Network with the aid of Felix Mohr, Data Scientist, 2017

A search on Google Scholar for “computer generated random images” led me to recent work led by Professor Jeff Clune at the University of Wyoming. Clune’s team used computer-generated imagery to assess the capabilities of a cutting-edge neural network Artificial Intelligence (AI) programme designed to automatically recognise photographic images. The AI programme appears to have been easily misled. Below are the highest scoring identifications:

Image of the 'top 40' computer-generated random images falsely recognised by a visual analysis AI programme (Professor Jeff Clune's team, University of Wyoming, USA)
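The trick becomes clearer once you see what such a classifier actually reports: a confidence score for its best guess, even when the input is pure noise. The sketch below measures that score; it is my own illustration, assuming PyTorch/torchvision and a pretrained ImageNet classifier rather than the exact networks used in the Wyoming paper, whose team went further and evolved images specifically to push this confidence towards 99%.

```python
# Illustrative only: ask a pretrained ImageNet classifier how confident it is
# about an image of pure random noise (skipping the usual input normalisation
# for brevity). Clune's team optimised images to maximise exactly this score.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)   # any ImageNet-trained classifier will do
model.eval()

noise = torch.rand(1, 3, 224, 224)         # a 224x224 'image' of uniform random pixels
with torch.no_grad():
    probabilities = torch.softmax(model(noise), dim=1)

confidence, class_index = probabilities.max(dim=1)
print(f"Top guess: class {class_index.item()} at {confidence.item():.1%} confidence")
```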

Whilst the reason for the mis-analysis of these images by the AI algorithm was in many cases far from obvious, this was getting even further away from this week's forum question. So I moved on.

The image below was produced on the www.random-art.org site using an algorithm written by Andrej Bauer. It composes an image ‘seeded’ by the set of words that the user types in. Although said to be truly random, if you put the same seed in a second time you get exactly the same result.

Image 'randomly' created by the 'strategies of freedom forum' text seed submitted by the author

In the above case the set of words was: ‘Strategies of Freedom Forum’. I was surprised how pleasant the abstract image was, with a degree of smudged effect generated without human intervention, apart from the word-seed provided. Except these weren’t the first words I put in.

I originally tried ‘FalmouthUniversity’. Not too bad a result, but the seed was a bit distant in more ways than one. So I tried ‘Institute of Photography’, with and without blank spaces between the words. Not so interesting. ‘Strategies of Freedom’ was tried next. I liked the word-seed, but not the result. So that’s how ‘Strategies of Freedom Forum’ came to be applied. There was indeed human intervention.
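The repeatability is not mysterious, by the way: in broad terms the word-seed is converted into a number that seeds a pseudo-random generator, so the same words always produce the same sequence of 'random' decisions and therefore the same picture. A minimal sketch of the idea in Python (my own illustration of hash-seeding, not Bauer's actual algorithm):

```python
# Illustrative sketch: hash a phrase to a number, use it to seed a
# pseudo-random generator, and derive a small grid of 'random' pixel values.
# The same phrase always reproduces the same grid; a new phrase gives a new one.
import hashlib
import random

def seeded_pixels(seed_text, width=8, height=8):
    seed = int(hashlib.sha256(seed_text.encode()).hexdigest(), 16)
    rng = random.Random(seed)   # deterministic once seeded
    return [[rng.random() for _ in range(width)] for _ in range(height)]

a = seeded_pixels("Strategies of Freedom Forum")
b = seeded_pixels("Strategies of Freedom Forum")
c = seeded_pixels("FalmouthUniversity")

print(a == b)   # True  - same seed, identical 'random' image
print(a == c)   # False - a different seed gives a different image
```

So the 'randomness' is entirely reproducible; only the choice of words, and my decision to keep or discard the result, was human.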

And I think that’s the issue with any imagery that we see publicised as being by a non-human source. The image itself may well be authentically produced by a non-human, but a very human selection/exclusion filter has almost certainly been applied along the way.

It may have been produced without human intervention, but I suspect there are many less interesting images that were never put forward for others to see.

We are selective about what we find interesting. The uninteresting or plain tends not to get a look in, even though such images may well be the vast majority produced. Equally, the meaning in our photographs, like the false positives Clune's neural network identified with ‘99%’ certainty, may not be the meaning intended. The viewer may identify a narrative or visual impact that we did not intend or even perceive. Photographic image value is very much in the eye of the beholder; perhaps the creator does not have to be sentient?

So does the image creator have to be human? It could be a collaboration after all.

 

This week we were also asked to reconsider our relationship with our 'preferred apparatus' by not using it, with 24 hours given to produce a mini-series of five images.

I interpreted this to mean that my usual camera gear was not allowed.

Full-Frame kit (Nikon) - NO

Half-frame Mirrorless kit (Fuji) - NO

Medium format film kit - NO

I’ve never used a mobile phone camera in shoots, so that is what I used. It is a new phone whose camera I had never tried (my old one was stolen a few days ago).

 

One of my Croydon Shopkeeper collaborators' premises

A potential collaborator

A potential collaborator's doorway in a new sub-project

The images illustrate the entrances of various current prospective shoots for sub-projects, all of which have proved difficult to arrange for a variety of reasons.

I'm not particularly happy with my use of the new apparatus; greater experience would have helped, as would less heat on a very hot summer's day.

 

References

1) Osborne, S. (2017) 'Monkey selfie case: Photographer wins two year legal fight against Peta over the image copyright'. The Independent, digital issue, 12th September, 10:30. See https://www.independent.co.uk/news/world/americas/monkey-selfie-david-slater-photographer-peta-copyright-image-camera-wildlife-personalities-macaques-a7941806.html (accessed 23rd June, 2018)

2) Jolicoeur-Martineau, A. (undated) 'Meow Generator', personal website. See https://ajolicoeur.wordpress.com/cats/ (accessed 23rd June, 2018)

3) Mohr, F. (2017) 'Deep Convolutional Generative Adversarial Networks', Towards Data Science website. See https://towardsdatascience.com/implementing-a-generative-adversarial-network-gan-dcgan-to-draw-human-faces-8291616904a (accessed 23rd June, 2018)

4) Vanhemert, K. (2015) 'Simple Pictures That State-of-the-Art AI Still Can't Recognize', Wired, 1st May, 6:30am, New York: Condé Nast Digital. See https://www.wired.com/2015/01/simple-pictures-state-art-ai-still-cant-recognize/ (accessed 23rd June, 2018)

5) Nguyen, A., Yosinski, J. and Clune, J. (2015) 'Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images', IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston. PDF copy at https://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_047.pdf; bibliographic details at https://ieeexplore.ieee.org/document/7298640/ (accessed 23rd June, 2018)

6) Bauer, A. (2018) 'Make your own random picture', Random Art website. See http://www.random-art.org/online/ (accessed 23rd June, 2018)