Guess What I'm Thinking

I fell down the rabbit hole of Artbreeder, a platform that uses a generative adversarial network to seamlessly “breed” images together. The website allows users to (lightly) control the process and document the results. Enough human images have been fed in that it can generate plausible imaginary people as well as any dedicated face GAN. But when applied to landscapes, objects, creatures, and textures, it produces results just as grotesque as the ones from DeepDream back in 2015. Instead of DeepDream’s screaming dog-oriented psychedelia, Artbreeder’s images often hover just out of the reach of plausibility, making things, arrangements, and planes that seem to call out to be realized.

While the site is currently only allowing uploads of human portraits, it is reasonable to assume that the field will open, and you will be able to mix your own images. Given the pace at which the field grows, I would expect that before long you will have the ability to do the same in 3D, and then 4D (if the server farms don’t give up and explode). This would make the work of the contemporary architecture student rather easier, since such a system could quickly and reliably generate multi-colored booleaned voids.

When I set about Artbreeding, I naturally started with landscapes. If you want to do so too, I would advise steering away, for the time being, from creating in Landscapes, which operates out of a shallow pool of fairly stock photography and painting, and going straight to General, where you can mix “genes” of snakes, clothes, and “medium objects” with your landscapes. The results, being fairly low resolution, are best viewed at small scale, where your mind starts to fill in the blanks to make them plausible – how would this image come to be?

In the process of scrolling through and blending images, you find a characteristic blend of exhilaration and dismay that will need its own word sooner or later, as such platforms proliferate; you see jawdropping things that completely lack the assumed scaffolding of work behind them, be that geological action over millennia or the blood, sweat, and tears in a painting on canvas. The work still exists – in every image uploaded to the system, in the design of the system itself, in the physical work of computation – but it has a tenuous relationship to what is output.

To put it another way, each individual image seems cheap, keeping pace with the cheapening of music – you do not make it yourself, or even pay for it upfront, or even wait an appreciable time for it – and so you do not reckon with its individual value. It has been suggested, most recently in connection with WeWork’s algorithmic generation of office space, that algorithmic design will sever the designer from tedious, repetitive, and predetermined work. Who would wish that any further interns put work into detailing parking lots? But then, how much of the value of design is secreted away in tedious, repetitive, and predetermined work? If you were to obey the call of reverse-engineering the products of Artbreeder, you would be staking endless time on transcribing a whim, one that might as readily have been spit out any other way.

And worst of all, after Artbreeding for a while, you go into public and everything seems Artbred; minutely varied instances of the same templates thrown plausibly here and there. How much effort just to repeat what is fundamentally the same!

(October 2019)