Building Without a Mind's Eye
Aphantasia, autism, and how agentic AI finally closed the gap between what I know and what I can build.
The Setup "Close your eyes and picture a beach."
Most people see one. I get nothing — a blank, dark screen where the image should be. Not a faint impression, not a sketch. Nothing at all.
I have aphantasia. It means I have no voluntary visual imagination — no mental imagery of any kind. Ask me to picture my front door and I'll draw a complete blank. It's been this way my entire life, which means for most of that life I had absolutely no idea it was unusual.
I also came to understand, later in life, that I'm autistic. The two aren't always linked, but for me they compound in an interesting way. Aphantasia makes it genuinely hard to spontaneously articulate an idea — if I can't see it in my head, describing it from scratch requires real cognitive effort. Autism adds its own layer: structured, direct communication works. Open-ended creative briefs, less so.
For thirty years, this has been the backdrop to my career as a full-stack developer. Most of the time it didn't matter — I was writing code, solving technical problems, building systems. Logic, patterns, architecture. None of that requires a mind's eye. But design, UI, creative direction? That's where things got complicated.
The Problem "What do you envision?" — the question I dreaded.
Early in a project, clients and collaborators often ask some version of that. It's a reasonable thing to ask a developer who's also handling the design work. For years, my honest answer would have been: I genuinely don't know yet, because I can't see it.
In practice, that meant I would actively turn down design work rather than face what I knew was coming — sitting in front of a blank screen for hours, knowing exactly what I wanted to achieve but completely unable to get it out. I had the taste, the vision, the technical ability. What I didn't have was the bridge between thinking it and making it real. It was like trying to draw a picture in a dark room, at night, blindfolded.
That's not a lack of skill. I have strong aesthetic preferences. I can look at two designs and immediately know which is better and explain precisely why. But generating that vision from nothing — conjuring it unprompted from a blank canvas — is a fundamentally different process for me than it is for most people. I need something external to react to. A concrete starting point to push against.
Aphantasia affects an estimated 1–3% of the population. It's the absence of voluntary mental imagery: the inability to form pictures (and, for some, sounds or other sensory impressions) in the mind on demand. It was only formally named in 2015, by the neurologist Adam Zeman and his colleagues. Many people don't discover they have it until adulthood, because it's not obvious that other people's inner experience is visually different from your own.
For most of my career, my workaround was iteration. I'd build something rough, look at it, react to it, improve it. Slow but effective. The gap between concept and output was just wider for me than for other developers who could sketch something in their head first. I worked around it by moving quickly from nothing to something, because once there was something, I could work with it.
The problem is that "building something rough to react to" is expensive. It takes time. In client work, it can look like uncertainty when it's actually just process. And when a project moves fast, that process doesn't always have room to breathe.
The Shift
Agentic AI changed the equation.
When I started working seriously with agentic AI tools — Claude Code, Google Stitch, and Claude itself as a genuine collaborator rather than an autocomplete engine — something shifted that I didn't fully anticipate.
The tools gave me something to react to, almost instantly.
Instead of starting from a blank canvas, I could describe intent — sometimes loosely — and get back something concrete. Not always right. Often needing significant work. But something. A scaffold. A starting point my brain could grab onto and start pulling apart.
I don't need to visualise the end result any more. I need to recognise it when I see it — and that I can do.
The shift sounds small, but its effect on my workflow was substantial. The cognitive overhead of generating ideas from nothing — which for me genuinely takes effort — collapsed. I was spending less energy on the starting gun and more energy on the actual race.
Google Stitch in particular was revelatory for the UI side. Describing a component or layout in plain language and getting back a visual I could critique and iterate on — that's almost exactly how I naturally work. I react, refine, redirect. Give me something wrong and I'll tell you exactly how to make it right. Give me a blank page and I'll get there eventually, but it costs more.
The Unexpected
Aphantasia might be an advantage.
Here's the thing I didn't anticipate: the absence of mental imagery might actually be a mild advantage in an AI-augmented workflow.
People with strong mental imagery can get attached to the picture in their head. They have a clear, specific vision and the tools don't always match it. That gap between imagined ideal and generated output can be genuinely frustrating. I don't have that problem. I have no pre-formed image to protect. Every AI output is evaluated on its own merits, not against a mental reference I can't unsee.
This isn't a post claiming aphantasia is secretly a superpower. It presents real challenges and I'm not going to dress those up. But in a specific context — iterative, AI-assisted creative work — the absence of strong visual preconceptions does seem to reduce a particular kind of friction.
My evaluation is faster and less ego-involved. Does this work? Yes or no. What needs to change? I can answer those questions cleanly, without mourning the gap between what I imagined and what exists. Because I didn't imagine anything.
The Bigger Picture
What this means for neurodivergent developers.
I've spent thirty years in an industry that doesn't talk much about how cognitive differences affect development workflows. There's plenty of conversation about accessibility in what we build — rather less about accessibility in how we build it.
Agentic AI is one of the first tools I've encountered that genuinely adapts to how I think, rather than requiring me to adapt to it. It externalises the generative work — the conjuring-from-nothing phase — and makes the iterative, evaluative work that I'm strong at the dominant part of the process.
For autistic developers who struggle with ambiguity, working with an AI that can take a loose brief and structure it into something concrete is similarly useful. For developers with ADHD, the ability to context-switch quickly without losing state has real value. These aren't incidental benefits; they're structural ones, rooted in how neurodivergent brains often work.
None of this replaces skill, taste, or experience. Thirty years of knowing what good code and good design look like is doing most of the heavy lifting. The AI isn't making decisions — I am. It's just removed a friction point that has quietly cost me energy for most of my career.
I built my most recent project — an e-commerce site for an antique watch dealer — almost entirely through agentic AI tooling, with Claude Code handling the heavy lifting on implementation. It was the first project in my career where the gap between what I knew I wanted and what I was able to produce felt genuinely small. Not because the tools are magic, but because for the first time, the workflow matched how my brain actually operates.
That feels worth writing about. And if it's useful to one other developer who's spent years quietly working around a visual imagination that just isn't there — good.
"Aphantasia means I've never seen the picture in my head. Agentic AI means I don't need to."
Steve Lavine is a full-stack developer and founder of Lavine Web & AI Solutions, working with creative agencies and SMEs across the UK, Australia, and beyond. lavine.dev