From Vibe Coding to Vibe Designing
Why the future of AI is less about speed and more about shaping the right problems to solve.
For a while, I embraced the hype of “vibe coding”—letting AI accelerate my craft, spinning thoughts into code faster than ever.
But I began to notice a familiar, deeper tension: speed wasn’t the real constraint. The real challenge has always been understanding the problem itself.
AI may free us from boilerplate, but it also tempts us to skip the slow, messy, and human work of making sense of complexity.
The shift we need isn’t from slow coding to fast coding, but from vibe coding 🧑🏻‍💻 to vibe designing 🎨—using AI not just to produce more code, but to challenge and clarify the problems worth solving in the first place.
I expected more from AI when it came to solutions, but it’s in framing the problems where it has truly surprised me.
TL;DR
AI makes it easy to code faster—but the real bottleneck is understanding the problem, not typing speed.
Chasing velocity risks producing the wrong solutions more quickly.
The real opportunity is to shift from vibe coding to vibe designing: using AI to deepen our modeling, clarify assumptions, and shape better problems before we ever write code.
The Velocity Trap: Why Faster Isn’t Better
There’s a seductive idea making the rounds: if AI writes code faster, our reviews need to speed up to keep pace. It sounds efficient; it’s often wrong. Hard projects don’t fail because people type slowly—they fail when teams have a shallow grasp of a complex problem. Shipping value means choosing the right problem, not just moving faster on any problem.
👉🏻 Good prompting is a result of deep understanding, not a substitute for it. 👈
If you rush the modeling and the language of the domain, you simply get the wrong thing, faster.
That said, pragmatism matters. Some teams have to ship quickly because of runway, market windows, or customer commitments.
You can keep speed without losing depth if you treat early cycles as discovery, not production—timebox spikes, make learning explicit, and delay irreversible decisions. In Wardley mapping terms, during the Genesis phase, fast, cheap experiments are sane—you’re exploring unknowns—so long as you deliberately slow down as the work evolves toward Product and Commodity.
It’s worth acknowledging that AI‑assisted iteration can actually increase understanding through trial and error. But this still belongs to the work of understanding the problem domain: let the system generate divergent approaches, compare failure modes, and compress the try–see–refine loop. Here, vibe coding serves vibe designing, helping us commit to a clearer model.
More code has a cost. Every new line expands the surface area you must test, review, and maintain. As Kent Beck and others have argued for years, low coupling and high cohesion aren’t academic preferences—they’re how you stay solvent. The leverage comes from less, but better code.
To simplify: more AI code throughput → higher risk of unnecessary coupling → bigger ripple costs → higher maintenance cost 💸
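As a hypothetical illustration of that chain (the function, class, and field names below are invented for the example): generated code that reaches into another module’s internal representation couples every call site to that detail, so a single rename ripples everywhere; the same rule owned by one cohesive object keeps the blast radius small.

# Hypothetical sketch: the same pricing rule, with very different ripple costs.

# Coupled: every generated call site depends on the internal shape of the order dict.
def total_coupled(order: dict) -> float:
    # Rename "unit_price_cents" or restructure "lines" and every copy of this logic breaks.
    return sum(i["unit_price_cents"] * i["qty"] for i in order["lines"]) / 100

# Cohesive: one owner of the rule; callers depend only on a small, stable interface.
class Order:
    def __init__(self, lines: list[tuple[int, int]]):  # (unit_price_cents, qty)
        self._lines = lines

    def total(self) -> float:
        return sum(price * qty for price, qty in self._lines) / 100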
The Imperfect Mirror: AI Isn’t Your Junior Partner
It’s tempting to treat an AI assistant like a human junior developer.
The analogy breaks quickly.
AI can produce immaculate‑looking artifacts, better than any junior’s and sometimes a senior’s, but it can hide subtle semantic bugs because it lacks embodied, team‑specific context.
It mimics the form of good code without holding the meaning.
A human junior grows. They absorb context, learn from mistakes, and eventually become the person who can prompt an AI well.
By design, the tool remains a perpetual junior: faster, broader, sometimes astonishing—but it doesn’t own assumptions, question foundations, or develop the kind of judgment that turns good builders into great ones, unless explicitly instructed to do so.
We coach a junior to replace us; we prompt a tool to execute.
Philosophically, this echoes Plato’s Divided Line: fluent artifacts aren’t the same as understanding. Useful, yes. Wise, no. (A nice primer lives at USC Dornsife, and there’s a deeper dive in my article 🧠 From Eikasia to Noesis: What Plato Can Teach AI 🏛️.)
The Soul of Design: More Than Just Code
In Domain‑Driven Design, Entities have identity and lifecycle; Value Objects are defined by attributes and are immutable. The “right” choice is contextual—and that context is social, technical, and business all at once.
AI can generate code in either style. What it can’t do for you is make the design decision in your domain. That isn’t a typing problem; it’s a modeling problem. Modeling is where the leverage hides.
Design in Context: E-Commerce Example
Here, Customer is the core entity. The owning team manages the customer’s lifecycle.
# Illustrative example (Python 3.10+); imports included so the snippet runs as-is.
from __future__ import annotations  # lets Customer reference Address before it is defined

from dataclasses import dataclass, field
from enum import Enum
from uuid import UUID, uuid4

class Status(Enum):
    NEW = "new"
    ACTIVE = "active"
    SUSPENDED = "suspended"

# Customer is an ENTITY with identity and lifecycle.
@dataclass(slots=True, kw_only=True)  # kw_only lets required fields follow the defaulted id
class Customer:
    customer_id: UUID = field(default_factory=uuid4, compare=True)
    email: str = field(compare=False, repr=False)
    status: Status = field(default=Status.NEW, compare=False)
    # One-to-many: a Customer can have multiple addresses.
    addresses: list[Address] = field(default_factory=list, compare=False)

    def __hash__(self) -> int:
        # Hash by identity for correct dict/set behavior.
        return hash(self.customer_id)

# Address is a VALUE OBJECT; immutable and without lifecycle.
@dataclass(frozen=True, order=True, slots=True)
class Address:
    street: str
    city: str
    postal_code: str
    label: str | None = None  # e.g., "billing", "shipping"
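To see the design decision at work, here is a minimal usage sketch, assuming the classes above (the names home, alice, and so on are illustrative): value equality makes addresses interchangeable, while identity keeps two data-identical customers distinct.

from dataclasses import replace

# Value Objects: equality is structural; equal addresses are interchangeable.
home = Address(street="Via Roma 1", city="Milan", postal_code="20100", label="billing")
same = Address(street="Via Roma 1", city="Milan", postal_code="20100", label="billing")
assert home == same

# Entities: equality is identity; identical data is still two different customers.
alice = Customer(email="alice@example.com", addresses=[home])
lookalike = Customer(email="alice@example.com", addresses=[same])
assert alice != lookalike

# An entity stays "itself" across lifecycle changes, because its identity persists.
activated = replace(alice, status=Status.ACTIVE)
assert activated == alice and activated.status is Status.ACTIVE

That difference drives real decisions: value objects can be freely deduplicated, cached, or replaced, while treating two customers as “the same” is a business call, not a data cleanup.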
Design in Context: Geospatial Example
Here, roles flip. Property is the core entity, and its lifecycle (e.g., for_sale, sold) is central. Note the language change from Customer to Occupant—that’s Ubiquitous Language emerging from actual conversations with stakeholders.
# Illustrative example (Python 3.10+); imports included so the snippet runs as-is.
from __future__ import annotations  # lets Property reference Occupant before it is defined

from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from uuid import UUID, uuid4

class PropertyStatus(Enum):
    FOR_SALE = "for_sale"
    SOLD = "sold"
    UNDER_RENOVATION = "under_renovation"

# Property is an ENTITY with identity and lifecycle.
@dataclass(slots=True, kw_only=True)  # kw_only lets required fields follow the defaulted id
class Property:
    property_id: UUID = field(default_factory=uuid4, compare=True)
    street: str = field(compare=False)
    city: str = field(compare=False)
    valuation: float = field(compare=False)
    status: PropertyStatus = field(default=PropertyStatus.FOR_SALE, compare=False)
    # One-to-many: a Property can track multiple occupants.
    occupants: list[Occupant] = field(default_factory=list, compare=False)

    def __hash__(self) -> int:
        # Hash by identity for correct dict/set behavior.
        return hash(self.property_id)

# Occupant is a VALUE OBJECT: who is at the property (immutable).
@dataclass(frozen=True, order=True, slots=True)
class Occupant:
    name: str
    lease_expires: date
Takeaway: These choices have little to do with raw typing speed and everything to do with domain insight. Wrong choices create costly ripple effects when you need to change course—maybe you’re lucky and you don’t, but more often you propagate conceptual tech debt through tons of lines of code, as I’ve done and seen in my career.
Use AI as a Socratic partner—to question assumptions, generate variations, explore edge cases, and help you explain why this design is right for this context.
An AI can generate code for both models, but it cannot make the crucial design decision without your specific business, technical, and socio-technical context. This reveals a deeper set of challenges:
Context Awareness: We must first be consciously aware of what the AI doesn't know in order to feed it the right information—a meta-level skill of "context curation."
Knowledge Externalization: This often requires a significant upfront investment in making implicit knowledge explicit, translating years of team wisdom and unwritten rules into a format the AI can consume.
Assumption Validation: Most critically, what happens when that explicit knowledge becomes outdated? An AI, lacking the ability to question its (or your!) foundational assumptions, will confidently produce flawed outputs based on obsolete information. (A minimal sketch of one countermeasure follows this list.)
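One lightweight way to act on the last two points (a hypothetical sketch: Assumption, its fields, and the 180-day window are inventions for illustration, not a standard practice) is to externalize team knowledge as data you can paste into a prompt, and attach a review date so stale assumptions surface before they quietly poison the output.

# Hypothetical sketch: domain assumptions as explicit, reviewable artifacts.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)
class Assumption:
    statement: str       # what the team currently believes to be true
    owner: str           # who can confirm or retire it
    last_verified: date  # when a human last checked it against reality

ASSUMPTIONS = [
    Assumption("A customer has exactly one billing address.", "payments-team", date(2025, 3, 1)),
    Assumption("Property valuations are re-estimated quarterly.", "geo-team", date(2024, 11, 15)),
]

def stale(assumptions: list[Assumption], max_age: timedelta = timedelta(days=180)) -> list[Assumption]:
    """Assumptions nobody has re-verified recently: question these before feeding them to the AI."""
    today = date.today()
    return [a for a in assumptions if today - a.last_verified > max_age]

# A cheap guardrail: warn (or fail a scheduled check) when your curated context has gone stale.
# assert not stale(ASSUMPTIONS), f"Re-verify before prompting: {stale(ASSUMPTIONS)}"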
From “Vibe Coding” to “Vibe‑Designing”
We hear a lot about vibe coding. The frontier is vibe‑designing: using AI not just to write code, but to weigh trade‑offs, compare models, and sharpen your language. I’ll dig into concrete tools for that in a future piece.
Balancing Speed and Depth (When Shipping Is Non‑Negotiable)
Use AI as a multiplier of options and a critic, not as the arbiter of the model.
Run two loops in parallel: a discovery loop (throwaway spikes; use AI to enumerate options and generate adversarial cases) and a delivery loop (hardened design; tests and invariants).
Define phase‑change triggers: when logic repeats, when shared language emerges, when SLAs or regulations appear—pause to model.
Practice: A 6‑Minute Pre‑Prompt
Before prompting (3 minutes):
Write three domain invariants (must always hold).
List three edge cases you’re likely to hit.
Sketch the lifecycle of the key entities (states and transitions).
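As one concrete form for those three minutes (a hypothetical sketch; the invariants, edge cases, and states borrow from the e-commerce example above and are purely illustrative), writing the notes as data rather than prose makes them easy to paste into a prompt and, later, into tests:

# Hypothetical pre-prompt notes for the Customer model above.
INVARIANTS = [
    "A customer's email is unique within a tenant.",
    "A suspended customer cannot place new orders.",
    "Deleting a customer never deletes historical orders.",
]

EDGE_CASES = [
    "Two signups race on the same email.",
    "A customer changes email while an order is in flight.",
    "A suspended customer still owns an active subscription.",
]

# Lifecycle sketch: allowed state transitions for the Customer entity.
TRANSITIONS: dict[str, set[str]] = {
    "new": {"active"},
    "active": {"suspended"},
    "suspended": {"active"},
}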
While prompting (2 minutes):
Ask the AI to attack your invariants and find your assumptions with counterexamples.
Request counter‑models and ask the AI to state trade-offs explicitly.
Generate a handful of adversarial examples you’d include in tests.
After prompting (1 minute):
Decide what changed in your model and why.
Capture new or refined terms in your ubiquitous language glossary.
Turn the adversarial examples into unit/integration tests.
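For example, two of the adversarial cases from the e-commerce model above could land in the test suite like this (a hypothetical sketch, runnable with pytest, assuming Customer and Address from the earlier example are importable):

# Hypothetical sketch: adversarial examples from the prompting session, frozen as tests.

def test_customers_with_identical_data_are_still_distinct_entities():
    # Adversarial case: "what if two records share every attribute?"
    a = Customer(email="dup@example.com")
    b = Customer(email="dup@example.com")
    assert a != b  # identity, not data, distinguishes entities

def test_equal_addresses_are_interchangeable_value_objects():
    # Adversarial case: "can we safely deduplicate shipping addresses?"
    x = Address(street="Via Roma 1", city="Milan", postal_code="20100")
    y = Address(street="Via Roma 1", city="Milan", postal_code="20100")
    assert x == y
    assert len({x, y}) == 1  # equal value objects collapse in sets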
From Speed to Clarity
The future isn’t a fork between a mindless factory and a thoughtful monastery. It’s a pragmatic mash‑up of both. In early, high‑uncertainty phases (Wardley mapping’s Genesis), moving fast is how you learn; later, deceleration is how you avoid expensive rework. The question isn’t whether we’ll use AI to accelerate our work, but how.
One path treats AI as a pure accelerator—turning the crank faster on existing processes. That path risks amplifying flaws and building the wrong thing beautifully. The better path treats AI as a cognitive lever. Use the time you save on boilerplate to model better, challenge assumptions, and clarify trade‑offs before you write a line of code.
In a world where competitors are happily turning the crank faster, the teams that win won’t just be the fastest—they’ll be the clearest.
Your Turn: From Vibe Coding to Vibe Designing
Have you felt pressure to “think faster” just to keep up with AI’s speed?
What’s the bigger risk in your work: shipping flawed code, or shipping perfect code for the wrong problem?
What’s one step your team could take this week to reward vibe designing—deeper thinking over raw speed?