Over the last couple of months, the art world has been turned on its head yet again, but this time not by blockchain technology. Instead, it’s artificial intelligence. You may have heard of DALL-E, Midjourney, or Stable Diffusion (the open-source counterpart to the first two). Image generation is just the most basic of beginnings, and the open-source movement is making the biggest splash.
What follows is a relatively short survey of the many ways that creators the world over are using AI image generation — specifically Stable Diffusion, for the biggest leaps forward — to revolutionize different areas within art, cinema, animation, product design, and more.
We’re going to start — roughly speaking — at the lower end of the MIND COMPLETELY BLOWN SPECTRUM (MCBS) and work our way up to the crazier use cases. (The MCBS score is entirely subjective and totally made up.) Let’s take a look.
Shout out to Daniel Eckler, who compiled many of the below examples into truly amazing Twitter threads.
Gerard Serra (@gerard_sgs on Twitter) built a tool in Fermat that generates detailed variations of an initial text prompt, helping to jog even more creativity out of every image generation. See the demo here.
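The internals of the Fermat tool aren’t public, but the general idea behind prompt variation is easy to sketch: take a base prompt and combine it with lists of modifiers. Here’s a minimal, hypothetical version in Python (the modifier categories and values below are my own illustration, not Serra’s actual lists):

```python
import itertools

# Hypothetical modifier lists -- the real tool's categories are unknown.
STYLES = ["oil painting", "watercolor", "digital art"]
LIGHTING = ["golden hour", "studio lighting"]

def expand_prompt(base_prompt, styles=STYLES, lighting=LIGHTING):
    """Generate prompt variations by appending one modifier per category."""
    return [
        f"{base_prompt}, {style}, {light}"
        for style, light in itertools.product(styles, lighting)
    ]

prompts = expand_prompt("a fox in a snowy forest")
# 3 styles x 2 lighting setups -> 6 prompt variations
```

Each variation can then be fed to the image model, so one idea fans out into a whole grid of candidate images.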
Digital artist Matt DesLauriers uses Stable Diffusion to extract color palettes from AI-generated imagery.
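DesLauriers hasn’t published the exact method here, but the classic way to pull a palette from any image is k-means clustering over its pixels: the cluster centroids become the palette colors. A self-contained sketch, assuming the pixels arrive as a list of (r, g, b) tuples (in practice you’d get these from an image library):

```python
def kmeans_palette(pixels, k=3, iters=20):
    """Cluster RGB pixels with k-means; the centroids form a palette.

    `pixels` is a list of (r, g, b) tuples. Assumes at least k
    distinct colors are present.
    """
    # Deterministic init: the first k distinct pixel values.
    centroids = []
    for p in pixels:
        if p not in centroids:
            centroids.append(p)
        if len(centroids) == k:
            break
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # Assign each pixel to its nearest centroid (squared distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # move the centroid to the mean of its cluster
                centroids[i] = tuple(sum(ch) / len(members)
                                     for ch in zip(*members))
    return centroids
```

Run it on the pixels of an AI-generated image and you get back k representative colors, ready to use as a swatch set.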
Using Stable Diffusion for Seamless Patterns & Textures
Replicate uses an open-source model from Monaverse to create seamless tiling images.
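What makes a texture “seamless” is simple to state: when the image is repeated, its opposite edges have to line up so no border is visible. As a toy illustration (not related to how Monaverse’s model works internally), here is a checker for that property on a 2-D grid of grayscale values:

```python
def tiles_seamlessly(img, tolerance=0):
    """Check whether a 2-D grid of grayscale values tiles seamlessly.

    A texture repeats without visible seams when its opposite edges
    match: the top row lines up with the bottom row, and the left
    column with the right column, within `tolerance`.
    """
    h, w = len(img), len(img[0])
    top_matches_bottom = all(
        abs(img[0][x] - img[h - 1][x]) <= tolerance for x in range(w))
    left_matches_right = all(
        abs(img[y][0] - img[y][w - 1]) <= tolerance for y in range(h))
    return top_matches_bottom and left_matches_right
```

Tiling-specific diffusion models bake this constraint into generation itself, which is why their output can wallpaper a 3D scene without visible repeats at the borders.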
And, thanks to Carson Katri, it’s possible to use Stable Diffusion right inside of the Blender UI to generate cinematic textures for use in animations and more.
@imkairu on Twitter posted his concepts for new Fortnite characters that were made using Stable Diffusion.
Similarly, Matt Reed used an initial image of a deeply pixelated Mario and the prompt below to create realistic character designs for Mario in different styles.
“hyperrealistic photo of a plumber run jumping wearing denim overalls and a red shirt, realistic proportions, highly detailed, smooth, sharp focus, 8k, ray tracing, digital painting, concept art illustration, by artgerm, trending on artstation, nikon d850”
And he got this:
Combined with Unreal Engine, @CoffeeVectors was able to realistically animate their Stable Diffusion-generated virtual human.
DreamStudio (powered by Stable Diffusion) has created some pretty amazing photography using text-to-image technology. Here are a few:
Filmmaker Paul Trillo combined DALL-E-generated images with stop motion animation principles to create this video of 150 car design variations.
@abubu_newnanka used Stable Diffusion to create a chilling visual story called “Missing in the Forest,” a sequence of images that builds a horror-movie atmosphere and could serve as fantastic concept design for a film. Click here to see the full tale.
AI Text-to-Image and Product & Fashion Design
Antonio Cao built a Figma plug-in to integrate Stable Diffusion into his product design process. Watch this video demonstrating how Cao collaborates with AI to design sneakers.
Director Karen X. Cheng used DALL-E in conjunction with EbSynth and DAIN to create a video showing off different AI-generated fashion concepts.
Going even further, TikTok user @ai_arty_gen created this entire runway collection.
Russ Maschmeyer debuted a concept video demonstrating how AI might, in the future, be used to create totally customized shopping experiences.
And then, he demonstrated how it might be used for home goods as well.
Augmented Reality and AI Art
VR designer Ben Desai created this video to demonstrate how artists and creators can turn to AI for nearly every part of the creative process to make things easier and faster to execute.
Artist Peter Piotrowicz posted this video on TikTok, showing how thoroughly he was able to use AI-generated imagery to create a full-blown augmented reality experience beyond anything I’ve seen before.
Using the Deforum plug-in with Stable Diffusion, creators have been able to easily make videos using nothing but generated work from Stable Diffusion.
Using Disco Diffusion, DoodleChaos created this music video for the song “Canvas” by Resonate.
Glenn Marshall made this literary short film built around excerpts from James Joyce’s novel “Portrait of the Artist as a Young Man.”
Coming soon, RunwayML will make filmmaking even easier. (If anyone from RunwayML reads this, I’m ready for beta access 🙏.)
In the meantime, Scott Lighthiser seems to have already mastered quite a bit of AI art magic. Just check out this stunning video below for MAXIM🤯M 🤯 MIND BL🤯W!
It’s crazy to think that we’ve barely scratched the surface of what’s possible with a months-old art form, and even so, we had to leave a lot of cool stuff out of this blog post.
Here at MakersPlace, we’re keeping a close eye on all of these developments and working with cutting-edge creators to bring their experiments to collectors the world over. Check in regularly to see new artists of all styles and mediums, and sign up for our newsletter to keep up with the exciting stuff we have planned for the months ahead — including a special AI art exhibit that we’re not quite ready to announce.
If you’d like to make suggestions or discuss this article, please contact firstname.lastname@example.org