May 2026

Know Yourself Before You Code

The advantage nobody tells you about.

I've been working with AI models for months in a way that, at first, felt like a minor detail. Then I realized it was almost everything.

I work in Spanish.

Not because the model doesn't understand English — it does, sometimes better than I do. I do it because Spanish is the language I think in. Not just communicate in: I process, qualify, hesitate, recognize when an idea doesn't hold together. In English I can transmit what I already know. In Spanish I can discover what I didn't know I thought.

The difference in results is real. And it's not about the language itself — it's about something more fundamental.


Affinity doesn't come pre-installed

When most people start using AI to code, the flow goes something like this: open the chat, type what you need, get code, paste it, move on. It works. Sometimes really well.

But there's another level. The difference is similar to the one between an outside consultant who shows up at your company with zero context and a colleague who's worked alongside you for years. Both can be brilliant. Only one knows how you think.

With an AI model, that affinity doesn't come configured out of the box. It's built. And the way you build it is, at its core, the same as with any working relationship: time, shared context, and — this is what gets underestimated most — communicating in the register where you actually think.


Your dominant language is a technical variable

There's a conversation that doesn't happen enough in software teams: the language someone thinks in isn't always the language they work in.

In the tech industry, English is the default. Documentation, frameworks, console errors, Stack Overflow discussions. All in English. That creates a silent assumption: if you can read and write technical English, you have everything you need.

That's not true.

Reading documentation in English is a skill. Thinking in English with the same density as in your native language is a completely different one. And when the task isn't reading someone else's code but building something from an idea — defining what problem we're solving, why, how it should behave — the density of thought matters.

A prompt written from native-language thinking has more layers. More implicit context. More nuance about what you want and what you don't. The model, which is extraordinarily sensitive to language, responds to that.


What this means for a team

I'm not saying everyone should use AI in their native language for everything. I'm saying it's worth asking.

For exploration, design, idea evaluation, feedback — the tasks where the quality of thinking determines the quality of the result — each person's dominant language is a real productivity variable.

A team that recognizes this isn't a team with a language problem. It's a team that understands its own tools.


Affinity goes both ways

There's more to this than language, even though language is the most tangible entry point.

When you've been working with a model in the same context for a while — same project, same way of framing problems, same thinking structure — the model starts to anticipate what you need with fewer instructions. Not because it learns in the technical sense of the word, but because the accumulated context makes your questions more precise and its answers more calibrated.

It's a relationship. Not in a sentimental sense — in the sense of mutual calibration. And like any working relationship worth having, it requires investing time in building it before you demand results from it.

Know yourself first. Know how you think. What language you reach for when a problem gets hard. What level of detail you need in an answer. What kinds of questions move you forward and which ones stop you cold.


That's not knowing how to use AI. That's knowing how to use yourself.


ArchMindset — software, teams, and what changes when AI enters the picture.