LLMs as Interface Translators: From Images to Code and Back Again
We're living in this wild moment where LLMs are becoming these universal translators between different domains. The hardest thing about living in the future is that we're figuring it out as we go, but these two recipes show how we're starting to make sense of it all.

Using Claude to Craft Midjourney Prompts
I've been obsessed lately with how Claude can serve as this bridge between written ideas and visual concepts. It's surprisingly good at it — and also surprisingly bad sometimes, which is part of the fun.
The workflow is dead simple (there's a rough code sketch of it after this list):
Throw your content at a key-themes prompt
Use the visual metaphor generator to explore different angles
Let Claude craft those detailed, evocative descriptions that Midjourney needs
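Here's a minimal sketch of that pipeline in Python, assuming the official anthropic SDK and an ANTHROPIC_API_KEY in your environment. The model name and prompt wording are placeholders I picked for illustration, not part of the recipe.

```python
# A minimal sketch of the "content -> themes -> Midjourney prompt" pipeline.
# Assumes the official `anthropic` Python SDK; the model name and prompt
# wording below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-latest"  # assumption: swap in whichever Claude model you use


def ask(prompt: str) -> str:
    """Send a single-turn prompt to Claude and return the text reply."""
    reply = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text


def midjourney_prompt(content: str) -> str:
    # Step 1: pull out the key themes and emotions from the source content.
    themes = ask(
        "List the key themes and emotions in the following content, "
        f"one per line:\n\n{content}"
    )
    # Step 2: have Claude turn those themes into a single detailed,
    # evocative Midjourney prompt.
    return ask(
        "Turn these themes into one detailed, evocative Midjourney prompt. "
        "Be concrete about subject, environment, lighting, and mood:\n\n"
        f"{themes}"
    )


if __name__ == "__main__":
    print(midjourney_prompt("An essay about leaving a small town for the city."))
```

Splitting it into two calls keeps the theme extraction visible, so you can nudge it before anything gets handed to Midjourney.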
What's fascinating here is the human-computer interface shift. We're moving from "I need to learn prompt engineering" to "I'll let one AI help me talk to another AI." It's like Windows 95 dragging us from the command line into the graphical world: suddenly the technical barriers start falling away.
The Visual Metaphor Generator gives you this framework (one way to phrase it as a prompt follows the list):
Pull out the core ideas and emotions
Translate those into visual symbols and environments
Build detailed descriptions that capture the essence
Make them specific enough for Midjourney to run with
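If you want something concrete to start from, here's one way to phrase that framework as a reusable prompt template. The wording below is my own guess at it, not the recipe's canonical text.

```python
# One possible phrasing of the Visual Metaphor Generator as a reusable prompt
# template -- the wording is illustrative, not the recipe's official text.
VISUAL_METAPHOR_PROMPT = """\
You are a visual metaphor generator. Given the content below:

1. Extract the core ideas and the emotions behind them.
2. Translate each idea into a visual symbol and an environment it could live in.
3. Combine them into one detailed description that captures the essence.
4. Rewrite that description as a Midjourney prompt: concrete subject, setting,
   lighting, style, and mood, in a single paragraph.

Content:
{content}
"""

# Usage with the `ask` helper from the earlier sketch (hypothetical):
# print(ask(VISUAL_METAPHOR_PROMPT.format(content=my_draft)))
```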
Lift and Shift: Mining Code for Knowledge
The second recipe tackles that eternal problem: how do we learn from existing code instead of constantly reinventing wheels?
This approach uses a few pieces (sketched in code after the list):
Simple command-line tools like npx repomix to extract the good stuff
Specialized prompts to process what we find
A structured way to document both requirements and those "wish I'd known that earlier" lessons
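Stitched together, the loop might look something like this. It assumes repomix's -o/--output flag (check npx repomix --help for your version), the anthropic Python SDK, and a placeholder model name; very large repos will overflow the context window, so in practice you'd use repomix's filtering options to trim the pack first.

```python
# A rough sketch of the lift-and-shift loop: pack a repo with repomix, then ask
# Claude to turn what it finds into requirements and lessons learned.
# Assumptions: repomix's -o/--output flag (verify against your version) and the
# placeholder model name below.
import subprocess
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"  # assumption: any capable Claude model works


def pack_repo(repo_path: str, out_file: str = "repomix-output.xml") -> str:
    """Run repomix against a local repo and return the packed contents."""
    subprocess.run(
        ["npx", "repomix", repo_path, "-o", out_file],
        check=True,
    )
    with open(out_file, encoding="utf-8") as f:
        return f.read()


def mine_knowledge(packed_repo: str) -> str:
    """Ask Claude for requirements plus 'wish I'd known that earlier' lessons."""
    reply = client.messages.create(
        model=MODEL,
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": (
                "From the packed repository below, document (1) the implicit "
                "requirements the code satisfies and (2) lessons learned that "
                "a new team would wish they'd known earlier.\n\n" + packed_repo
            ),
        }],
    )
    return reply.content[0].text


if __name__ == "__main__":
    print(mine_knowledge(pack_repo(".")))
```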
It's surprising how dumb the process used to be, manually digging through repositories, and how much better it is now that these tools can do the heavy lifting.
This really shines when paired with a memory system. The AI remembers context across sessions, which pushes things to the next level when you're working through complex codebases.
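To make that concrete, here's a bare-bones illustration of what cross-session memory can be at its simplest: persist the lessons you extract to a JSON file and prepend them to the next session's prompt. The file name and shape are invented for this sketch; a dedicated memory tool does the same job more robustly.

```python
# A bare-bones illustration of cross-session memory: save extracted lessons to
# a JSON file and feed them back in at the start of the next session.
# The file name and data shape are invented for this sketch.
import json
from pathlib import Path

MEMORY_FILE = Path("codebase_memory.json")  # hypothetical location


def load_memory() -> list[str]:
    """Return previously saved lessons, or an empty list on first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text(encoding="utf-8"))
    return []


def save_memory(lessons: list[str]) -> None:
    """Persist the current list of lessons for the next session."""
    MEMORY_FILE.write_text(json.dumps(lessons, indent=2), encoding="utf-8")


def with_memory(prompt: str) -> str:
    """Prepend remembered lessons so a new session starts with old context."""
    remembered = load_memory()
    if not remembered:
        return prompt
    context = "\n".join(f"- {lesson}" for lesson in remembered)
    return f"Things we already learned about this codebase:\n{context}\n\n{prompt}"
```

The idea is the same whatever tool you use: yesterday's findings ride along with today's prompt.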
Both recipes show how we're creating these multi-layered translation systems, whether that's turning concepts into visual prompts or transforming tacit code knowledge into explicit documentation. The real work is figuring out what that means for day-to-day practice, not just what these amazing demos hint at.
We're just starting to see how these tools reshape our creative and technical workflows in 2025. The interfaces between humans, code, and visual systems are blurring in ways that would have seemed magical just a few years ago.