To state the obvious, AI-powered text-to-image engines like Midjourney, DALL-E, and Stable Diffusion have taken the world by storm in the last few months. Combine the shock and awe of these developments with more accessible GPT-3–powered tools, and you have the makings of the biggest shift in how we make and perceive art since the birth of modernism. 

But first, we must clarify some terms. Much of the internet refers to AI art as “AI generative art,” which causes quite a muddle, given that artists and programmers have been making “generative art” with (and without!) computers since the 1960s. The savvy collector would also do well to understand the difference between the types of AI art, whether created with a diffusion model or a GAN or otherwise. 

This is the first of several posts that will describe the landscape of generative art and the processes used therein. In a follow-up, we’ll talk to artists about the processes they use, including training their own AI models, switching between text-to-image generators, and using post-production techniques such as glitching and compositing. 


What’s the difference between algorithmic generative art and AI generative art?

Algorithmic generative art refers to art that is created using a set of predefined rules or algorithms. These algorithms are often mathematical or logical in nature, and they are used to generate visual patterns, images, or animations. The artist designs the algorithm and sets the parameters, but the final output is determined by the algorithm itself.

AI-generated art, on the other hand, is made with machine-learning techniques, specifically neural networks: a network is trained on a large dataset of images and then used to generate new ones. The artist may have some control over the network’s parameters and the training data, but the final output is determined by the network itself.

Even more succinctly:

  • Algorithmic generative art is created using predefined rules or algorithms. 
  • AI-generated art is created using neural networks and machine learning techniques.

Algorithmically Generated Art (FKA “Generative Art”)

With the entry of AI generative art (not our term), it seems that what was once the only generative art game in town must now be described in much more specific terms. (This has been confusing to me too.) 

Within the practice of algorithmic generative art, the algorithms used to generate art can range from simple mathematical formulas to more complex, self-generating programs. The artist typically sets parameters for the algorithm, such as color schemes or geometric shapes, but the final outcome is determined by the algorithm itself.

One example is using fractals, geometric shapes that can be split into smaller versions of themselves, which the artist can manipulate to create unique, complex images.
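
To make that concrete, here is a minimal sketch in Python (assuming the Pillow imaging library is installed): one short formula, iterated per pixel, produces the endlessly detailed Mandelbrot set. The resolution and color mapping are arbitrary choices for illustration.

```python
# Minimal fractal sketch: color each pixel by how quickly
# the orbit z -> z**2 + c escapes to infinity.
from PIL import Image

WIDTH, HEIGHT, MAX_ITER = 400, 300, 60

img = Image.new("RGB", (WIDTH, HEIGHT))
for px in range(WIDTH):
    for py in range(HEIGHT):
        # Map the pixel to a point c in the complex plane.
        c = complex(-2.5 + 3.5 * px / WIDTH, -1.25 + 2.5 * py / HEIGHT)
        z = 0j
        for i in range(MAX_ITER):
            z = z * z + c
            if abs(z) > 2:  # the orbit escaped; c is outside the set
                break
        shade = int(255 * i / MAX_ITER)
        img.putpixel((px, py), (shade, shade // 2, 255 - shade))

img.save("fractal.png")
```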


Ringers #387 by Dmitri Cherniak

Another example is creating generative art using cellular automata, which use relatively simple rules to trigger unpredictable, complex emergent outputs. Consider the simplicity of a single ant compared to the emergent complexity and intelligence of a large ant colony. 

Generative artists can create intricate, lifelike patterns on a computer screen by programming a large number of entities to follow basic rules and interact with each other. They are not simply recreating images from our world, but rather, they are designing the rules for their own unique worlds and observing the emergent behaviors that result.
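
For the curious, here is a minimal sketch of a one-dimensional cellular automaton in plain Python: Wolfram’s Rule 30, in which each cell’s next state depends only on itself and its two immediate neighbors, yet the printed output is famously chaotic. The rule number and grid size are arbitrary choices for illustration.

```python
# One-dimensional cellular automaton (Wolfram's Rule 30).
# Each cell's next state is a function of (left, self, right).
WIDTH, STEPS, RULE = 79, 40, 30

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start with a single live cell in the middle

for _ in range(STEPS):
    print("".join("█" if c else " " for c in row))
    # The 3-cell neighborhood forms an index 0..7 into the rule's bits.
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```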

Some of today’s most popular generative artists include Jared S. Tarbell, Tyler Hobbs, and Manolo Gamboa Naon, and some of the most successful collections include Hobbs’ Fidenzas, Autoglyphs from the team behind CryptoPunks, Ringers by Dmitri Cherniak, and Chromie Squiggle by Snowfro, though this is a deeply abbreviated and insufficient list. 


Fidenza #313 by Tyler Hobbs

What about Generative PFPs?

Generative PFPs like Bored Apes, CryptoPunks, Pixelated Heroes, or World of Women often involve much more manual design work than the algorithmic art described above. 

These NFT collections are often generated by stacking PNG images (backgrounds, base characters, and individual traits and variations) that all share the same dimensions and relative position within the final graphic. Say you were creating a 10k collection of Sunny Day Sloths who all need sunglasses: the artist would create color, shape, and size variations of the sunglasses, and the algorithm would combine them at different rarities relative to one another. 

Each layer has a transparent background, so they stack cleanly onto the image, and a simple rules-based algorithm creates the random combinations. 
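
A hypothetical sketch of that pipeline in Python (assuming Pillow; the trait names, file paths, and rarity weights here are invented for illustration, not from any real collection) might look like this:

```python
# Hypothetical trait-stacking sketch: transparent PNG layers of identical
# dimensions are composited in order, with rarity expressed as weights.
import os
import random
from PIL import Image

TRAITS = {
    "background": [("beach.png", 60), ("sunset.png", 40)],
    "base":       [("sloth.png", 100)],
    "sunglasses": [("black.png", 70), ("gold.png", 25), ("laser.png", 5)],
}

def make_sloth(seed: int) -> Image.Image:
    rng = random.Random(seed)  # seeded so each token is reproducible
    canvas = None
    for layer, options in TRAITS.items():
        files, weights = zip(*options)
        choice = rng.choices(files, weights=weights, k=1)[0]
        png = Image.open(f"{layer}/{choice}").convert("RGBA")
        canvas = png if canvas is None else Image.alpha_composite(canvas, png)
    return canvas

os.makedirs("output", exist_ok=True)
for i in range(10_000):
    make_sloth(i).save(f"output/sloth_{i}.png")
```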


Pixelated Hero #2564


AI-Generative Art

In recent years, there have been significant developments in the field of AI-generated art, and the last six months have seen those developments blossom into culture-shifting leaps in capability and accessibility. 

There are four primary models for image generation: Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Flow-based models, and Diffusion models. For the purposes of this article, we’ll focus on the two most common: GANs and diffusion models. 

Both GANs and diffusion models generate images from noise. 

GANs pair two neural networks. The generator starts from random noise (optionally conditioned on an informative variable such as a class label or text encoding) and attempts to produce a realistic image. The discriminator network (i.e., the adversary) evaluates the generator’s work, labeling each image as either real (from the training set) or fake (synthesized by the generator). Every time the fakes are caught, that feedback pushes the generator to try again and improve. 
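
As a rough illustration of that adversarial loop only (assuming PyTorch; the tiny fully connected networks and random stand-in “training” images are placeholders, not any artist’s actual setup):

```python
# Conceptual GAN training loop with placeholder data and toy networks.
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28

generator = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, IMG) * 2 - 1          # stand-in for real training images
    fake = generator(torch.randn(32, LATENT))   # the generator starts from noise

    # Discriminator: learn to label real images 1 ("real") and fakes 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call its fakes "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```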


Pindar van Arman with his robotic painter

Diffusion models are trained by adding noise to an input image step by step (a process called forward diffusion) and teaching a neural network to undo that corruption (backward, or reverse, diffusion). To generate a new image, the trained network starts from pure noise and denoises it one step at a time. The neural network architecture must be chosen such that it preserves the data’s dimensionality.
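
Here is a minimal sketch of the forward half of that process, assuming NumPy and a DDPM-style linear noise schedule (the schedule values are commonly cited defaults, not taken from this article; the blank “image” is a placeholder):

```python
# Forward diffusion sketch: mix Gaussian noise into an image over T steps.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # per-step noise variances
alpha_bars = np.cumprod(1.0 - betas)    # cumulative signal retained by step t

def noisy_image(x0: np.ndarray, t: int, rng=np.random.default_rng(0)):
    """Jump straight to step t of forward diffusion (closed form)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

x0 = np.zeros((64, 64))                 # placeholder "image"
for t in (0, 250, 999):
    print(t, noisy_image(x0, t).std().round(3))  # noise dominates as t grows
```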

Commercial text-to-image generators like DALL-E, Midjourney, and Stable Diffusion rely on diffusion models, so it feels safe to say that the vast majority of AI-generated art is created with a diffusion model. 

Artists using GANs, such as Pindar van Arman and Gene Kogan (as well as the other artists who have released their work on the BrainDrops platform), typically have technical backgrounds. 

Artists using diffusion models are too numerous to count, but some of our favorites include Empress Trash, Gala Mirissa, jrdsctt, Maximilian, Jenni Pasanen, AICAN, and GreyMask. 


“Mask Obscura – Spring” by Jenni Pasanen

The barriers to entry are crumbling for those wanting to create work and exhibit their artistic sensibilities and visions. We are only in the early stages of a massive shift in culture, and, at this point, there’s no telling what the future will look like.


For updates on all of our editorial features, subscribe to our newsletter below. 👇