Dall-E and me

The image below is one of my photographs of Mãe d’Água das Amoreiras (Mother of the Water) Reservoir, Lisbon, taken in August this year. If I were conscientious I would label it with alt-text: “plan view of a large thin metal wheel tap in a horizontal position above a reflective underground cyan pool in a classical building.”

Synthesised similarity

Having registered with the platform and subscribed, I uploaded this image (without text) to the Dall-E platform at labs.openai.com. After a few seconds the platform returned my original image plus four variants: images undoubtedly similar to the original, but in settings that by my reckoning are unlikely to exist. I selected the fourth one in that row and the platform generated another four images as variants. I repeated the process three more times, at each iteration selecting the image that looked as unlike my original as I could imagine.

Generative graphics

So Dall-E here served as part of a generative system. It has an initial condition (my original photograph), rules/algorithms (in this case based on Dall-E’s sophisticated neural network pre-trained on millions of images), and selection criteria (in this case my own rule about maximising difference). The end state is undetermined, though in this case I wanted to produce an interesting grid of similar images that seem to deviate and evolve. (Perhaps I could sell an NFT of a hi-res version of the picture as crypto-art!)
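The loop is easy to sketch. In this sketch, `variants` and `most_different` are hypothetical stand-ins for the Dall-E platform and for my own selection rule; a real run would call the API and use a human eye rather than a random pick:

```python
import random

def variants(image, n=4):
    # Hypothetical stand-in for the Dall-E platform: returns n
    # variant "images" (labels here, not pixels).
    return [f"{image}.{i}" for i in range(1, n + 1)]

def most_different(candidates):
    # Hypothetical stand-in for my selection criterion (maximise
    # difference from the original); random here for illustration.
    return random.choice(candidates)

def generate(initial, iterations=5):
    # Initial condition: the original photograph, plus its first
    # four variants.
    rows = [[initial] + variants(initial)]
    selected = rows[0][-1]  # I selected the fourth variant
    # Rules and selection criteria applied repeatedly; the end
    # state is undetermined.
    for _ in range(iterations - 1):
        row = variants(selected)
        rows.append(row)
        selected = most_different(row)
    return rows
```

The three components of the generative system — initial condition, rules, selection criteria — map directly onto the function arguments and bodies.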

These 25 images are part of a vast search space of possible images: a branching decision tree with four paths at each node. The images here are just 25 from a space of 1 + 4 + 4² + 4³ + 4⁴ + 4⁵ = 1,365 images, growing to over a million after just 10 iterations.
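That count is easy to check with a few lines of Python:

```python
# Size of the branching search space: one original image, then
# four new variants for every image selected at each iteration.
def search_space(iterations, branching=4):
    return sum(branching ** i for i in range(iterations + 1))

print(search_space(5))   # 1365 images after 5 iterations
print(search_space(10))  # 1,398,101 images after 10 iterations
```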

The Google Lens platform also provides impressive and useful image-matching capability. I wanted to confirm that Dall-E had not simply returned similar pictures already existing in an image repository. Google Lens returned a collection of existing images, each with web links. I was pleased to see that the results differed from the synthetic collection generated by Dall-E.

Text to image

The Dall-E platform also uses text to prime the synthesis of new images. To that end I fed it my alt-text description: “plan view of a large thin metal wheel tap in a horizontal position above a reflective underground cyan pool in a classical building.” Dall-E generated four novel images. They fit my description though, as expected, they don’t lead to a reconstruction of my original.

OpenAI seems to promote Dall-E as a means of generating novel artworks, designs, logos, etc. from a text-based specification. Here are some of my attempts:

A skyscraper in a desert made of broken glass.

A skyscraper in a desert made of water.

A city made of bitcoins in the metaverse.

Edinburgh Newtown tenements with flying cars in the style of blade runner photorealistic film noir.

I entered some of my book titles:

Network Nature: The Place of Nature in the Digital Age

Cryptographic City

Mood and Mobility: Navigating the Emotional Spaces of Digital Social Networks

Dall-E also synthesises images primed with arbitrary text. Here are some sentences from previous blog posts:

“Stories that include dialogue encourage engagement. Readers put themselves in the position of the interlocutors, even shifting their sympathies as the conversation progresses.”

“We are in essence half spheres looking for our other halves.”

“The city as organism, living city, sentient city: these concepts invite reflection on the putatively intimate relationship between mind and matter — panpsychism”


The name Dall-E is an obvious reference to the artist Salvador Dalí (and Wall-E). I explored the influence of Surrealism in Derrida for Architects. In that book I explained: an umbrella against an umbrella stand would excite little interest, nor would the sight of a patient on an operating table, but place an umbrella on an operating table and you have something else. So the surrealist artist Max Ernst wrote that when the ‘ready-made reality’ of an umbrella is placed with that of a sewing machine on an operating table, the occasion provides the possibility for ‘a new absolute that is true and poetic: the umbrella and the sewing machine will make love’ (Breton, 275). So here is an umbrella and a sewing machine on a surgical operating table.

I added “making love,” but Dall-E returned “It looks like this request may not follow our content policy.”

Amongst the other achievements of its inventors, Dall-E has generated a proliferation of surreal imagery online. Here’s another contribution: A toothbrush eating a banana in a fish tank.

For a helpful introduction to the technology see Ryan O’Connor’s “How DALL-E 2 Actually Works”: https://www.assemblyai.com/blog/how-dall-e-2-actually-works/

Also see posts: Feature detection: Cows, cars and automobiles, Reverse image search, The hallucination machine, and Hallucination everywhere.


  • Breton, André. Manifestoes of Surrealism. Ann Arbor, MI: University of Michigan Press, 1969.
  • Coyne, Richard. Derrida for Architects. Abingdon: Routledge, 2011.
  • dallery.gallery. “DALL-E 2 Prompt Book.” 14 July 2022. Accessed 30 September 2022. http://dallery.gallery/wp-content/uploads/2022/07/The-DALL·E-2-prompt-book-v1.02.pdf
  • O’Connor, Ryan. “How DALL-E 2 Actually Works.” AssemblyAI, 19 April 2022. Accessed 27 September 2022. https://www.assemblyai.com/blog/how-dall-e-2-actually-works/
  • Ramesh, Aditya, Mikhail Pavlov, Gabriel Goh, and Scott Gray. “DALL-E: Creating images from text.” OpenAI blog, 5 January 2021. Accessed 1 October 2022. https://openai.com/blog/dall-e/
  • Ramesh, Aditya, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. “Zero-Shot Text-to-Image Generation.” arXiv, 26 February 2021. Accessed 1 October 2022. https://arxiv.org/abs/2102.12092
  • Seth, Anil. Being You: A New Science of Consciousness. London: Faber, 2021.
  • Suzuki, Keisuke, Warrick Roseboom, David J. Schwartzman, and Anil K. Seth. “A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology.” Scientific Reports 7, no. 1 (2017): 15982. https://doi.org/10.1038/s41598-017-16316-2

