Up till now, I was mainly using VQGAN+CLIP and CLIP+Guided Diffusion. In #8, however, there is the first set from another kind of algorithm. The small thumbnails I used to select a seed image for the main one were generated with a much more powerful family of algorithms: diffusion models, the approach behind engines like Stable Diffusion. This is the kind of algorithm that made headlines when DALLE was announced to the world. In fact, those thumbnails were created with DALLE-2, the first open trial version of it. It is a very powerful engine and every single one of the images on this page was made using it. There were other engines using the same kind of algorithm when I first checked (for instance Craiyon), but their results were often not very artistic, so I rarely used them. DALLE-2 was quite literally one step ahead of each of them (and in some ways still is).
Most of DALLE-2's limitations are actually self-inflicted. It does have a lot of rules about its use, some quite common (no nudity or violence), others quite understandable (no using known people in the prompts, for instance). But beyond those rules, DALLE-2 is quite open: when you prompt it, it gives you four image choices. You can take one image and make something like it, and you can remove parts of an image and have the engine repair or finish it (a very, very useful feature). More recently you can use it to expand an image. People were already doing this manually, expanding the frame with alpha and feeding the images back to the engine. But now it is a built-in feature and DALLE-2 is quite good at it. It is called outpainting and is still a beta feature introduced this month (September 2022).
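Here is a minimal sketch of that manual expand-the-frame trick, using Pillow: paste the generated image onto a larger fully transparent canvas, so the empty alpha border becomes the area the engine's edit tool will fill in. The file names and border size are just placeholders.

```python
# A minimal sketch of the manual "outpainting" trick described above:
# center the generated image on a larger transparent canvas, so the
# empty (alpha) border is what the engine's edit tool gets asked to fill.
from PIL import Image

def expand_canvas(src_path: str, dst_path: str, border: int = 256) -> None:
    """Center the source image on a larger, fully transparent canvas."""
    src = Image.open(src_path).convert("RGBA")
    w, h = src.size
    canvas = Image.new("RGBA", (w + 2 * border, h + 2 * border), (0, 0, 0, 0))
    canvas.paste(src, (border, border), src)  # paste with alpha so the border stays empty
    canvas.save(dst_path)  # save as PNG to keep the alpha channel the edit tool relies on

expand_canvas("seed_1024.png", "seed_expanded.png")
```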
I have used DALLE-2 literally hundreds of times. It is quite good for fast prototyping of thumbnails and also for creating seed images for other algorithms. Its editing features are also very, very impressive and are probably something the other engines should try offering. Every image on this page was produced using only prompting and selection on DALLE-2, as is quite obvious from the several small mistakes that could easily be fixed with editing. I intentionally did not use the editing features, to which I will return later.
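A rough sketch of that prompt-and-select loop is below. It assumes the openai Python package and an API key (the images on this page were actually made through the web interface); the prompt, size, and key are placeholders.

```python
# A hedged sketch of the prompt-and-select workflow used for thumbnails:
# ask the engine for several candidates for one prompt, then pick one by eye.
import openai

openai.api_key = "sk-..."  # placeholder key

response = openai.Image.create(
    prompt="a small oil painting of a lighthouse at dusk, thick brushstrokes",
    n=4,              # several candidates per prompt, DALLE-2 style
    size="256x256",   # small and cheap, good enough for thumbnail prototyping
)

for i, item in enumerate(response["data"]):
    print(i, item["url"])  # review the candidates and keep the best seed image
```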
I will first introduce the other engines that also use diffusion models. I will begin with the first one I saw that produced a lot of incredible images, looked more "artistic" and coherent, and does not have the same restrictions that DALLE-2 has: Midjourney. But I will not dwell on it and will return to DALLE-2's features later.