Please enjoy this advent calendar, generated and illustrated with the help of three machine learning models (GPT-3, DALL-E 2, and Midjourney).
- Eggnog as far as the eye could see
- The fantastical lion of Mor-Bollox
- Saturated Red Turkeys
- Blue reindeer (they’re bouncing)
- Candy Cane Palm Trees
- A smiling present
- Blitzen, bowling
- Undy, Undy, Underscore
- Cheetah print stegosaurus
- Whistling Christmas Trees
- Giraffes holly
- Underwater pine tree
- Dire reindeer, before the meltdown
- Lightning Chestnuts
- Bouncing top hat beetle
- Laser-eyed deer
- Snakes in Hats
- Gecko Poinsettias
- Melted Ornaments
- Christmas on mars
- Tinsel Parrots
- Christmas Caroling snails
- Astro gingerbears in space suits
- A disgruntled christmas tree brandishing a plasma cannon
How it was done:
I used GPT-3 to generate the door descriptions of an advent calendar by having it complete the following story:
It is the year 20022 and the robots have carried on certain human traditions. Many of them come and go, but every year without fail there is an advent calendar, originated by a glitchy robot santa in collaboration with a motorized christmas tree. The robots have researched the calendar and determined that the Christmas scenes behind each door are highly atypical of the human tradition, but everyone enjoys them anyways.
The Christmas advent calendar for the year 20022:
Brief door descriptions:
I used the original davinci variant of GPT-3, which tends to be better at generating weird text than the newer variants (which try too hard to make their output look like text they’ve seen before). As usual, I kept maybe 1 in 10 of the door descriptions it generated; the others were usually either too strange to illustrate or close copies of doors it had already generated.
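For the curious, the generation step looks roughly like this. This is a minimal sketch using the legacy OpenAI Python library (pre-1.0), where base models like davinci were available through the Completions endpoint; the sampling settings and helper name are my own assumptions, and only the story prompt itself is quoted from the post.

```python
# Sketch of the GPT-3 door-generation step (legacy openai<1.0 library).
# The story prompt is quoted from the post; the trailing "-" invites the
# model to continue the bulleted list of door descriptions.
story = (
    "It is the year 20022 and the robots have carried on certain human "
    "traditions. Many of them come and go, but every year without fail "
    "there is an advent calendar, originated by a glitchy robot santa in "
    "collaboration with a motorized christmas tree. The robots have "
    "researched the calendar and determined that the Christmas scenes "
    "behind each door are highly atypical of the human tradition, but "
    "everyone enjoys them anyways.\n\n"
    "The Christmas advent calendar for the year 20022:\n"
    "Brief door descriptions:\n"
    "-"
)

def generate_doors(n_samples: int = 10) -> list[str]:
    """Ask the original davinci base model to continue the story.

    Requires `pip install "openai<1.0"` and an OPENAI_API_KEY in the
    environment. Temperature and max_tokens are guesses, not the
    author's actual settings.
    """
    import openai
    response = openai.Completion.create(
        model="davinci",     # the original base model, not an instruct variant
        prompt=story,
        max_tokens=200,
        temperature=0.9,     # run hot for weirder completions
        n=n_samples,
    )
    return [choice["text"] for choice in response["choices"]]
```

Most of the craft is in the prompt framing (a far-future story that *explains* why the doors are strange) and in the human curation afterward, not in the API call itself.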
For the illustrations, I used two different text-to-image models, DALL-E 2 and Midjourney, asking each for something like “The minty breath of the Starfish, 16-bit pixel art game cut scene” in an attempt to get an image that would look good while tiny.
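The image prompts all share the same shape: a door description plus a fixed style suffix. A trivial sketch of that construction (the suffix is quoted from the post; the helper name is mine):

```python
# Each door description gets the same style suffix, so the generated art
# reads well at calendar-door size.
STYLE_SUFFIX = "16-bit pixel art game cut scene"

def image_prompt(door_description: str) -> str:
    """Build the full text-to-image prompt for one calendar door."""
    return f"{door_description}, {STYLE_SUFFIX}"

print(image_prompt("The minty breath of the Starfish"))
# → The minty breath of the Starfish, 16-bit pixel art game cut scene
```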
Unlike Pixray, which I used to generate images for the 2021 advent calendar, DALL-E 2 and Midjourney aren’t actually working with just a few pixels. Instead, they’re faking it. Look at the image below, which DALL-E 2 generated for the door description “The minty breath of the Starfish”. It seems at first glance to be pixel art, but the pixels vary in alignment, size, and aspect ratio. Some pixels even have curved sides. It knows approximately what pixel art looks like, but not WHY pixel art looks that way. I find the concept of fake pixel art pretty funny.
Midjourney takes even more license with the concept of pixel art. It has a limited color palette and some lines with pixelated edges. But are those smooth gradients in the sky and water? And how big are its “pixels” anyway? The Midjourney model is optimized to make things look pretty above all else, including above fidelity to my weird prompts.
The “minty breath of the Starfish” door is one of the ones I rejected as not likely to work well when illustrated. Sure enough, both models just slapped a starfish in an ocean scene and called it a day.