Throughout 2022, and especially this past month, I've been experimenting with AI art. The word hasn't caught on yet, but I'm a fan of the new term "synthography" to describe both the images and the process, for a couple of reasons.
Synthography appears to have been coined by Elke Reinhuber, an artist, researcher, and educator, as a neologism by analogy to photography. While AI-generated images are pictures (JPGs, PNGs, etc.) just like any others, the process of making them is very different. A photo is developed from film, or captured digitally by exposing a sensor to light; a syntho is generated *completely digitally*, starting from noise and "filling in" an image. In that sense the process is "synthetic" (granting the view that digital things are artificial, which I have some issue with, but that's not the main topic here and could be discussed separately at another time).
I started using DALL-E mini (now Craiyon) earlier in 2022, but became transfixed by DALL-E 2 when I got off the waitlist in August. The potential of this new art form and medium is amazing. People have already thought of and published so many possibilities online, and there is so much more still to do.
For instance, with img2img functionality, a given picture or piece of art can become the "seed" for an entire series: one painting can now start a whole aesthetic, materialized rather than just imagined. Another use is to overlap frames from a source picture to "outpaint" (or "uncrop") an image beyond its original borders. People have been sharing their findings, discoveries, and creations online, applied to all sorts of pictures, including famous paintings from art history. I've also done a bit of experimenting with running memes through DALL-E 2, and the results have been honestly amazing (particularly if you like surreal memes 😂)
"Synthography," though, can carry a double meaning beyond just "computer-generated image." As I've said, technologically, the images themselves are created in this new, "synthetic" way. But beyond the technology, what is the experience like for the human on the other side of it? I can only speak for myself here (though if you are experimenting with AI art too, please contact me; let's talk about it and be friends), but this artmaking practice is unlike any other medium I am aware of: a synthetic engagement with the "medium" of the AI itself. "Prompt engineering" is the practice of crafting a statement, or prompt, for the AI to run from, through however many iterations you want or can handle.
Set up the prompt, let it run for a bit generating images at low resolution, then tweak it: modify word order and placement, add synonyms, call out specific negations. It's an almost alchemical process of combining constituent elements (the words) into images, using language as a kind of recipe, instructions for the AI to attempt generations of your intentions (and, unless the process crashes, you'll at least get something back 😂). It is an amazing feeling to use words, text, language toward a visually generative purpose. I'll keep trying to feel out and describe exactly what it's like. Maybe some other people already know. But hopefully I'm conveying what an absolutely amazing art form we're on the verge of having in the world.
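That iteration loop can even be sketched in code. This is a minimal illustration, not any particular tool's API: the `generate` function below is a placeholder standing in for a real image model call, and its name and parameters are my own inventions for the example.

```python
def build_prompts(subject, styles, negatives):
    """Combine a subject with style synonyms and explicit negations
    into a list of prompt variants to try."""
    prompts = []
    for style in styles:
        prompt = f"{subject}, {style}"
        if negatives:
            # Many tools support negations separately; here we just
            # append them as plain text for illustration.
            prompt += ", " + ", ".join(f"no {n}" for n in negatives)
        prompts.append(prompt)
    return prompts

def generate(prompt, steps=20, width=256, height=256):
    """Placeholder for a real image-model call (e.g. a diffusion
    pipeline); returns a fake 'image' record for illustration."""
    return {"prompt": prompt, "steps": steps, "size": (width, height)}

# Iterate cheaply at low resolution, then rerun the keeper at higher quality.
variants = build_prompts(
    "a lighthouse at dusk",
    styles=["oil painting", "watercolor"],
    negatives=["text", "watermark"],
)
drafts = [generate(p, steps=10) for p in variants]
# ...inspect the drafts, pick the best prompt, then:
final = generate(variants[0], steps=50, width=768, height=768)
```

The point of the sketch is the workflow, not the code: generate cheap drafts across prompt variants, look at them, and only spend full quality on the wording that worked.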
There are no "rules" other than what's possible. It's hard to say what is really essential: could any of this have happened without Stack Overflow, much less Google? (See also the asterisked note below.) You don't need Photoshop, though it definitely helps. You don't even need a computer, really; you can do all of this from your phone. Its proof is its existence.
So yeah, I just wanted to write out some thoughts on the topic as of early September 2022. It feels like human creativity has just become a little more unleashed (or maybe a lot). The ability of ideas and imagination to be more fully actualized (in the visual realm at least, which, while not everything, is an important one) just took a major step.
* * * * * * *
*There is additional nuance here to be elaborated at a future date, because for AI image generators to work, they must be trained on real data: existing photography, digital art people have already made, or other image sources. This has many implications, such as what's included in or left out of the dataset, how detailed and specific the captions or tagging are, and what kind and quality of material it is.