
We used AI to design our company holiday card. Here’s how it went.

It’s the first holiday season of the new AI era, so we decided to use AI-assisted design to bring Thinkso’s annual client holiday card to life. Our concept was to reimagine Santa for a more enlightened age and audience. It was out with the old and in with the new, so what better tool to use for the graphics than the newest generation of generative AI?

Choosing the right platform

Just like any design tool (and every human designer for that matter), different AI engines and platforms have different strengths, weaknesses, emphases, and biases. We needed one that would fit our brief (and our workflow).

The first candidate was Adobe Firefly, because of our familiarity and comfort with Adobe Creative Cloud. But we found it too limited, producing images that looked too much like stock photography. We also tried DALL·E, arguably the most advanced platform from the buzzy OpenAI. It’s great at generating a range of illustration styles, but much weaker at photorealistic images.

Our new vision of Santa needed to have relatable and genuinely emotional facial features. So we settled on Midjourney Bot for its best-in-class ability to generate photorealistic human images. (Midjourney’s costs are also lower.)

The challenge of creating AI images to spec

One of the most fun aspects of AI design is the unexpected delights you can get from simple prompts (and the often dreadful and hilarious ones, too). Midjourney had that in spades.

But we needed it to produce images that aligned with our strategy. And the images needed to be consistent in style and tone. That meant we had to figure out the perfect text prompts to feed the bot.

It helped to remember that AI, for all its power and “imagination,” is ultimately tied to existing images, however many millions of them. You don’t need to reinvent the wheel. So rather than trying to describe specific characteristics in the prompt, we found it much more successful to use references from pop culture.

For example, using the actor name “Golshifteh Farahani” allowed us to home in on the look we wanted for High-Tech/Low-Impact Santa without going down a rabbit hole of skin tones and body types.

A collage of four images of the same person in various holiday-themed settings, surrounded by Christmas trees, lights, and piles of wrapped gifts, expressing cheerful emotions.

Giving the bot a reference from pop culture can be a great shortcut to getting the right look.

As expected, using “Wes Anderson” resulted in pleasantly colored, symmetrical layouts with subjects that sported quirky expressions.

A person in a red jacket and cap stands in a subway station, holding a yellow gift bag, with subway cars blurred in the background.

Wes Anderson called. He wants his scene back!

Similarly, using the names of specific camera models, film types, and lighting styles became essential shortcuts to getting a look that might take a paragraph-long prompt to describe.

Getting around (and sometimes embracing) the quirks and anomalies

We found out pretty quickly that generating something either overly generic or grotesquely fantastic was easy. Hitting the sweet spot of realistic yet novel was much harder.

The first obstacle with reimagining an iconic figure like Santa was AI’s assumptive leaps.

For example, when we asked for a “white man, 55 years old, white beard, on snowmobile, holding red gifts, delivery” (without any mention of Christmas), the bot dutifully produced hyper-traditional Santa Claus figures!

Four festive scenes with Santa Claus on various vehicles, like a scooter and ATV, surrounded by a Christmas tree and presents. Snow is falling.

AI assumed we wanted a traditional Santa, even when we left “Christmas” out of the prompt.

Consistency was also a challenge. Designing to a goal is an iterative process, but getting Midjourney Bot to tweak something was almost impossible. Asking for a simple modification could completely alter the base image or lead to bizarre results.

The image begins with an individual in a vibrant urban night setting, wearing red festive clothing, riding a scooter, and carrying packages. It morphs into an older, bearded person in heavy red robes, in a similar pose in the same setting.

After already modifying the background on the left image, we prompted Midjourney Bot to “make the face older and more realistic.” Nuance is not the bot’s strong suit.

We also ran up against AI’s weird gaps in understanding. For all of its sophistication and all-seeing-ness, it gets a lot of simple things wrong — something anyone who has played around with it knows. (For example, AI can produce amazing images of chess pieces, but cannot accurately depict a chess game. The pieces won’t be aligned on the squares on the board, there may be only one color for all the pieces, or there may be three queens.)

For our project, Midjourney Bot had difficulty combining “real” objects with imagined ones, the kind with few or no readily available images to use as a source. Asking for “robot reindeers” didn’t quite work out.

A collage of four images of the same person wearing a spacesuit in a snowy forest with various depictions of a robotic reindeer.

You’d think a robot would have a better idea of what a robot reindeer might look like.

But part of the fun, and part of the reason we chose to use AI for this project, was to capture that Uncanny Valley feeling: that eerie, can’t-look-away sensation that something isn’t quite right but is trying to be. We leaned (carefully) into this because we wanted the images to feel AI-generated. In fact, we discarded images that looked too real, as well as ones that looked like obvious mistakes.

An AI-generated image of two people in an urban street setting posed with bicycles. The image has strange, out-of-place artifacts.

Can you spot what isn’t quite right?

Confronting biases and stereotypes

The platform’s gender and racial biases were a more serious problem, especially because we were trying to address these kinds of biases in our diverse and inclusive reimagination of Santa.

Part of the danger of generative AI is that it draws on image data that is rife with stereotypes. We saw this reinforced in our own searches and prompt results.

  • Age: Using the prompt “50 years old” produced images of both men and women with full heads of white hair. When prompted with “Middle-aged,” “40 years old,” or “35 years old,” men were depicted accurately, but women looked significantly younger. It was as if the ages between 35 and 50 didn’t actually exist — the bot went from young adult to senior citizen.
  • Skin color: When prompted with “Black men” or “Black women,” images completely lacked a range of skin tones. Even when “light-skinned Black” was added to the prompt, people in the images were depicted with the same dark color.
  • Body type: Often, body types were automatically assigned a stereotype. For example, without any prompts related to body type, all the Polynesian men were depicted top-heavy, bulky, and masculine. And all the images of women were automatically depicted with Barbie-type bodies unless directed otherwise with specific prompts.

Knowing when to say when and step in

The big question: How much work outside of the AI platform would it take to deliver the final project?

Going in, we didn’t really expect to get finished images from AI. We assumed that at some point, we’d have to stop the exploration and do the final details ourselves in Photoshop.

We learned that time spent refining the prompts and getting the overall image as close as possible exclusively with AI was valuable. It allowed us to leverage what AI is good at: its vast capacity, speed, turn-on-a-dime flexibility, and oddball flair. From there, we acted as editors for the most part.

But to create complex images with layered content, it was much more efficient to generate the main figure and the environmental details separately in AI, and then combine them into one image ourselves.

Midjourney Bot limits the number of references it will pull from a single prompt, so we couldn’t have gotten both our Hygge Santa scene and the elves in one image. Adding the elves ourselves also gave us more control over their scale and positioning.

A cheerful person in a red hoodie holds a white mug, standing beside a model of a reindeer. The image morphs as two miniature elves appear in the scene.

Creating small details with new prompts and adding them to the overall image was easier and more efficient.

In the same vein, after generating a different top for the High-Tech/Low-Impact Santa, we also added magical pine orbs and solar panel epaulettes to promote the environmental angle — and a chest logo for fun.

A person wearing a red vest and white jacket stands smiling in front of a large stack of wrapped presents. The image morphs as a floating glass orb appears and the figure’s skin tone darkens.

We augmented the AI images with small but important touches to underscore our concept. This included adjusting the skin tone of High-Tech/Low-Impact Santa to a more realistic value to compensate for the limited range generated by Midjourney Bot.

We also stepped in when a group of images needed to be consistent with each other (consistency being one of the hardest things to control with AI).

For Common Claus, we generated each character separately, with prompts that varied age, race, scenery, body type, and more. But to represent them as a cohesive group, we adjusted the reds and added a (custom-designed, ahem) logo to each character. They’re interesting images individually, but tweaking just a few elements helped them hang together as a team.

A collage of six different people wearing red uniforms with a white circular logo, smiling and posing individually.

Creating consistency across images with AI is hard. We tweaked colors and added unifying elements to make a set of individual images “on brand.”

The bottom line

We kind of knew what we were getting into by using AI (almost) end to end for this project, but along the way we also learned some great lessons for using AI in our workflow. Our advice:

  • Start with a strong strategy and concept. Doing so really kept us on track. AI can take you to crazy places fast. We stayed away from rabbit holes and endless iterations because we had a clear goal.
  • Within the confines of the brief, stay open to the quirks and personality of the bot. This helped us leverage its strengths, rather than fight against its weaknesses. It’s a bit more like a colleague than a tool, and it helps to think about it that way. There’s room to let it do the heavy lifting, respect its choices, and go with its unique POV — just like any valued design partner.
  • Set realistic time and efficiency expectations. As far as generating photorealistic human images goes, AI is a timesaver — once you have mastered the tools. But that, of course, takes a lot of time.
  • Always count the fingers! AI is great at sneaking in an extra digit or two, the little devil.