
LLM owns the Code; Developer should own the Thinking


🏷️ #llm #programming #architecture #product_owner #product_vision


It’s tempting nowadays to outsource code ownership to LLMs.
However, I’d like to keep the cognitive ownership for myself.

Code ownership vs. Cognitive ownership

I shipped a couple of projects where 90+% of the code was written by an LLM. However, I still consider them my projects. Why? — because I designed their architecture, thought through the components, and planned the data flow, API, DB structure, etc.

The LLM helped, but it didn’t make any crucial technical decisions that I wasn’t aware of. Moreover, it produced the code in the way I would most likely have written it myself.
Just way faster.
On the other hand, I can’t compare Codex, Copilot or Claude to my Logitech keyboard. The LLMs act like a well-trained, highly autonomous assistant whom I trust to write the code as per the specification.

Don’t get me wrong; I understand that LLMs can be even more autonomous and can make their own architectural decisions¹. But I remain accountable for the project. Hence, I have to understand the crucial details for the time when I run out of tokens 😀

How to stay on track?

Through an exhaustive CLAUDE.md and thorough component/feature descriptions.

Typical folder structure
📂 /
|
+-- 📂 claude/
|    |
|    +-- CLAUDE.md
|    |   # Component one
|    |   **Overview** ...
|    |   [Details](./Component_1.md)
|    |
|    +-- Component_1.md
|    +-- Component_2.md
|    ...
|
+-- 📂 src/
|    |
|    ...

Through a gradual approach: first implement X, then Y. By “implementing” I mean writing the code, running the tests, verifying corner cases, etc. Much of this still remains on me: even if the LLM writes/runs the tests, I still design them and control the outcome.
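“Designing the tests” means I write down the corner cases myself, even when the LLM fills in the implementation. A minimal sketch of what that looks like — the `parse_amount` function and its rules are hypothetical, just an example of a spec expressed as tests:

```python
# Hypothetical function under test: parses "12.50 EUR" into (cents, currency).
# An LLM may write (or rewrite) this body, but it must satisfy the tests below.
def parse_amount(s: str) -> tuple[int, str]:
    value, currency = s.strip().split()
    euros, _, cents = value.partition(".")
    return int(euros) * 100 + int(cents or 0), currency

# The corner cases are MY design decisions: surrounding whitespace
# and a missing fractional part must both be handled.
assert parse_amount("12.50 EUR") == (1250, "EUR")
assert parse_amount(" 7 USD ") == (700, "USD")
```

The point is not the parsing itself but the direction of control: the tests encode my understanding of the contract, so I notice when a generated implementation drifts from it.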

Through code review and refactoring (sometimes manual). As in the pre-LLM era, the product evolves, and decisions that were acceptable yesterday become sub-optimal in today’s reality.

Is it really necessary?

Depends on the product. If it’s simple/standard enough, or if it’s a one-off task, such an approach is overkill. I can rely on the LLM and accept the result if it just works.
But if it’s a long-living project, I prefer to have more control over (or, at least, understanding of) what’s going on under the hood.

Rise of management roles

Coding and technical decision-making are necessary but not the only prerequisites for a successful project. The project fails without proper steering: defining what needs to be done, justifying why, and planning when. Sounds familiar? — right, this is about Product Ownership and Project Management.

Working with an LLM resembles working with a team: defining the product goal and strategy, setting the constraints, planning the development milestones.
This holds for technical aspects as well: the focus of control shifts from implementing concrete functionality to the broader architectural vision.

New risks

In a reality where developers stop writing the code, the real risk is that they stop understanding it. When the architecture, data flow, and edge cases live only inside prompts and chat history, the project becomes fragile. The moment something breaks, no one knows where to look.

The “cognitive ownership” approach should mitigate this risk. Like other hedging mechanisms, it’s expensive and requires an investment of time and human effort. Besides, it requires discipline: sticking to the usual SDLC routine, keeping 📂 claude/*.md up-to-date as technical decisions evolve, describing test scenarios and acceptance criteria.


Whilst LLMs can write code faster than any human, they can’t take over responsibility for the system. Someone, not something, still has to understand the architecture, the trade-offs, and the consequences of each decision. In that sense, the developer’s role is not disappearing; it is shifting. Real ownership of a project is no longer about typing every line of code, but about keeping the mental model of the system alive.

References

  1. Speaking of “own decisions”, let’s not forget that we are talking about a Language Model which was pre-trained on a huge amount of data and generates text based on likelihood scores.
    On the other hand, do we make decisions in a radically different way? 🤔
