BrXnd Dispatch vol. 002
On understanding, hot sauce, and reverse-engineered prompts
Hi everyone. Welcome to the second BrXnd Dispatch. If you’re wondering what this is or how you got signed up: this is a roughly-weekly dispatch of interesting stuff at the intersection of brands, AI, and creativity, and you likely opted in on BrXnd.ai. If that’s not interesting to you, feel free to unsubscribe.
First, a few housekeeping items:
I’m spinning up a Discord to build a community of like-minded folks interested in this intersection of brands, AI, and creativity. If that’s of interest, please join us and share what you’re finding.
Thanks to the many folks who were in touch this week, both about the conference and with links and other interesting tidbits. Please keep those coming. As soon as I have more info on the conference, I will share it, but it’s coming along and still planned for Spring 2023 in NYC. If you are interested in speaking or sponsoring, please be in touch.
Okay, onto what’s on my mind.
Here’s what Sam Altman, CEO of OpenAI, had to say about why DALL-E 2 found so many fans:
It crossed a threshold where it could produce photorealistic images. But even with non-photorealistic images, it seems to really understand concepts well enough to combine things in new ways, which feels like intelligence. That didn’t happen with DALL-E 1.
This resonates with my own experience. Seeing DALL-E 2 interpret the attributes of brands made me look back at GPT-3 and dig in more. It was clear that these large language models have an understanding of the world that sometimes goes beyond our own. That’s not hugely surprising, as they’re working off massive datasets and are set up to spot patterns that can escape the human eye and brain. But that doesn’t make it any less impressive.
One of the things I’ve been spending some time with lately is seeing what kind of data I can extract that articulates that understanding. It’s still early days, but the results are fascinating. Can you guess what brand this is?
Colors: The primary colors used in branding are bright red and yellow.
Patterns: The logo features a bright red, yellow, and black pepper icon, which is repeated in a repeating pattern throughout the brand's packaging and promotional materials.
Iconography: The pepper is used to represent the brand's signature hot sauce.
Other Unique Aesthetic Attributes: The branding also features a bright, vibrant typeface, which is used to create a bold and eye-catching look. Additionally, the brand often incorporates Mexican-inspired imagery and motifs into its packaging designs.
If you guessed the hot sauce Cholula, you were correct. There’s nothing earth-shattering in there, and it’s still early days to see what I can extract, but it’s enough to suggest there’s a there there. One challenge that has always existed in marketing and brand-building is separating fact from fiction—or maybe it's better put as reality from aspiration. Every brand wants to be Nike, Apple, or Red Bull, but there’s a reason only a handful of brands like that exist.
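For the curious, an attribute dump like the one above can be requested with a fairly simple prompt. Here’s a minimal sketch of how such a prompt might be built — the wording and the helper function are my own guesses for illustration, not the actual prompt used:

```python
def brand_attribute_prompt(brand: str) -> str:
    """Build a prompt asking a language model to describe a brand's
    visual identity, structured under fixed headings. This wording is
    a hypothetical reconstruction, not the prompt actually used."""
    return (
        f"Describe the visual identity of the brand {brand}. "
        "Structure your answer under these headings: "
        "Colors, Patterns, Iconography, "
        "and Other Unique Aesthetic Attributes."
    )

# The resulting string would then be sent to a completion endpoint
# such as GPT-3's; the headings mirror the Cholula example above.
print(brand_attribute_prompt("Cholula"))
```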
There’s nothing inherently wrong with a brand failing to live up to its aspirations. The trouble comes when that truth isn’t internalized. One interesting aspect of this data is that it acts as a mirror. When I built Brand Tags, I frequently heard that agencies loved it because it let them tell the CMO their “baby was ugly” without needing to be the one delivering the message. “It’s not us saying it … it’s the people.”
Alright, onto some links.
Brands X AI
These amazing AI-imagined Nikes come from the Instagram account AI Clothing Daily. (h/t Leila)
Tools X AI
Img2prompt lets you take an image and approximate a prompt for it. It’s running on Replicate, which also gives you an API for the output. As a test, I’ve been giving it some CollXbs to see what it thinks the prompts are.
For this Ford x Grateful Dead Van, for instance, it returned:
a red and white van with a skull painted on it, a digital rendering by Christian Hilfgott Brand, behance contest winner, maximalism, behance hd, rendered in cinema4d, redshift
I believe these are tuned to Stable Diffusion, but it’s still an interesting way to learn the tricks of the prompt engineering trade.
ExplainPaper.com uses AI to help you understand confusing research papers.
Ytsummary is a Python script for summarizing YouTube videos with GPT-3.
From ArsTechnica: Riffusion’s AI generates music from text using visual sonograms. If you want to try Riffusion, you can also play with it over on Replicate.
Of course, I couldn’t leave you without a few of my favorite CollXbs of the week. Thanks to the many of you who made some. If you haven’t had a chance, I release new ones every day. You can track when they go out on Twitter @BrXndAI. (If you really want a shot at making one and haven’t been able to, email me, and I’ll set you up with a code.)
Thanks for reading. Please share with others and send any feedback, thoughts, or links. I hope to see you on Discord.
Until next time,