BrXnd Dispatch vol. 003
On code, partnerships with AI, and how our tools shape us.
Hi everyone. Welcome to the BrXnd Dispatch. If you’re wondering what this is or how you got signed up, it’s a roughly weekly newsletter of interesting stuff at the intersection of brands, AI, and creativity. You likely opted in on BrXnd.ai. If this isn’t interesting to you, feel free to unsubscribe. Also, a reminder: planning for the spring 2023 Brand X AI conference in NYC is in full effect. If you’re interested in speaking or sponsoring, please be in touch.
On my mind this week has been code. Specifically, GitHub Copilot. If you aren’t familiar with Copilot, it’s GitHub’s AI assistant, built on OpenAI’s Codex model. It integrates as a plugin with VS Code, Microsoft’s incredibly powerful open-source code editor, and, much like Gmail’s Smart Compose, it suggests autocompletions in real time. Unlike Gmail’s, though, the recommendations are amazing. Not only are they generally good code, but they’re also specific to the codebase you’re working in. That means it gets to know the particular APIs and variable names you’re using and makes recommendations based on those context clues.
Here’s a real example from a project I’m working on.
It’s pretty amazing. Sure, it makes some stupid mistakes sometimes, but generally it’s right on, and it’s particularly useful for remembering syntax and for writing API code. The former is directly competitive with something like Stack Overflow, which is where I used to find answers to my dumb little syntax questions. The latter, on the other hand, is more directly competitive with engineers themselves. It’s not surprising that something like Copilot would excel at writing API code. After all, APIs are all documented (most of them by the code itself) and therefore pretty easy for a computer to make sense of. Here’s an example of Copilot writing some API code for Remove.bg, an AI service that removes backgrounds from images.
Connecting APIs is a huge, time-consuming, and often annoying part of writing code, and it’s interesting to think about the implications of a system like this being particularly good at that problem.
Most fascinating to me, however, is how I’ve found myself subtly (and not so subtly) changing my own behavior to work better with Copilot. It’s real “we shape our tools, and thereafter our tools shape us” stuff. One thing I’ve been doing is writing far more descriptive variable names. That’s good practice in general, but it’s particularly useful when partnering with an AI, since it gives the system more context clues to work with. In the example above, I strongly suspect naming the variable removeBgResponse helped Copilot make sense of the rest of the fetch call I was making.
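For the curious, here’s a hedged sketch of what that kind of call looks like. The endpoint and X-Api-Key header follow Remove.bg’s documented v1.0 API, but buildRemoveBgRequest is my own illustrative helper, and none of this is the exact code from my project:

```javascript
// Sketch of a Remove.bg call in the style Copilot completed for me.
// Endpoint and header follow Remove.bg's documented v1.0 API;
// buildRemoveBgRequest is an illustrative helper, not part of any SDK.
function buildRemoveBgRequest(imageUrl, apiKey) {
  return {
    method: "POST",
    headers: {
      "X-Api-Key": apiKey,
      "Content-Type": "application/json",
    },
    // image_url is the simplest input; the API also accepts file uploads
    body: JSON.stringify({ image_url: imageUrl, size: "auto" }),
  };
}

async function removeBackground(imageUrl, apiKey) {
  // the descriptive name removeBgResponse gives Copilot a context clue
  const removeBgResponse = await fetch(
    "https://api.remove.bg/v1.0/removebg",
    buildRemoveBgRequest(imageUrl, apiKey)
  );
  if (!removeBgResponse.ok) {
    throw new Error(`Remove.bg request failed: ${removeBgResponse.status}`);
  }
  // the response body is the binary image with its background removed
  return removeBgResponse.arrayBuffer();
}
```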
The other thing I’ve been doing is blocking out my code with comments. The result is that Copilot will sometimes offer up all the code I need to make things work. Here’s a toy example of calling Contentful, a headless CMS, where you can see the clear impact of writing a few comments first. I had no Contentful code in the codebase I was recording this in, so Copilot was working purely off the name of the file and the comments I added. Pretty amazing stuff.
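To make that concrete, here’s a minimal sketch of the kind of code those comments can coax out. It targets Contentful’s Content Delivery REST API rather than their SDK; the space ID, token, and content type are placeholders, and buildEntriesUrl is a hypothetical helper of my own:

```javascript
// Toy example: block out intent with comments, let Copilot fill in the code.
// Uses Contentful's Content Delivery REST API; buildEntriesUrl is a
// hypothetical helper, and the IDs/tokens are placeholders.

// build the URL for fetching entries of a given content type
function buildEntriesUrl(spaceId, accessToken, contentType) {
  const params = new URLSearchParams({
    access_token: accessToken,
    content_type: contentType,
  });
  return `https://cdn.contentful.com/spaces/${spaceId}/environments/master/entries?${params}`;
}

// fetch all blog posts from Contentful
async function getBlogPosts(spaceId, accessToken) {
  const response = await fetch(buildEntriesUrl(spaceId, accessToken, "blogPost"));
  if (!response.ok) {
    throw new Error(`Contentful request failed: ${response.status}`);
  }
  const json = await response.json();
  // each entry's fields object holds the actual post content
  return json.items.map((item) => item.fields);
}
```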
A roundup of other interesting stuff at the intersection of brands, AI, and creativity I ran into over the last week. Please keep the links coming. You can reply to this email, join the BrXnd.ai Discord, or send me a note at email@example.com.
[Article] This TechCrunch article about diffusion tech is a pretty good summary of how these image models work. From the piece:
Diffusion systems have been around for nearly a decade. But a relatively recent innovation from OpenAI called CLIP (short for “Contrastive Language-Image Pre-Training”) made them much more practical in everyday applications. CLIP classifies data — for example, images — to “score” each step of the diffusion process based on how likely it is to be classified under a given text prompt (e.g. “a sketch of a dog in a flowery lawn”).
At the start, the data has a very low CLIP-given score, because it’s mostly noise. But as the diffusion system reconstructs data from the noise, it slowly comes closer to matching the prompt. A useful analogy is uncarved marble — like a master sculptor telling a novice where to carve, CLIP guides the diffusion system toward an image that gives a higher score.
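The scoring loop the excerpt describes can be caricatured in a few lines. This is a toy, not real diffusion: a plain number stands in for an image, random perturbations stand in for the model’s denoising proposals, and a simple distance function stands in for CLIP. The only point is the shape of the process, where a scorer steers iterative refinement toward a prompt:

```javascript
// Toy caricature of CLIP-guided diffusion (NOT a real model).
// A number stands in for the image, target stands in for the text
// prompt, and score() stands in for CLIP's match score.

// higher score = candidate "matches the prompt" better
function score(candidate, target) {
  return -Math.abs(candidate - target);
}

function guidedDenoise(target, steps = 500) {
  let current = Math.random() * 100; // start from pure "noise"
  for (let i = 0; i < steps; i++) {
    // the "model" proposes a couple of random refinements
    const proposals = [
      current,
      current + (Math.random() * 4 - 2),
      current + (Math.random() * 4 - 2),
    ];
    // the scorer keeps whichever proposal best matches the prompt,
    // so the state drifts toward the target step by step
    current = proposals.reduce((best, c) =>
      score(c, target) > score(best, target) ? c : best
    );
  }
  return current;
}
```

Run enough steps and the “noise” converges on the target, which is the intuition behind the marble-carving analogy: the scorer never generates anything itself, it just says which of the model’s proposals is closer.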
[Fun] My local tiki bar is doing ChatGPT-generated cocktails:
[Instagram] Art-O-Maton imagines a World’s Fair that never was.
And, of course, some of my favorite BrXnd CollXbs of the week:
Thanks for reading. Please share with others and send any feedback, thoughts, or links. I hope to see you on Discord.
Until next time,