Djangocon EU: a practical guide to agentic coding for Django developers - Marlene Mhangami¶
(One of my summaries of the 2026 Djangocon EU in Athens).
Code completion in your IDE has been around for a while. For about two years now, you can generate code with an LLM by describing what you want and copy/pasting the results into your program. Nowadays you can do agentic coding: you give an AI agent access to your environment so that it can actually create the files (including a file with tests) on its own.
A definition: an AI agent is an LLM (large language model) that calls tools in a loop to achieve a goal.
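That definition can be sketched in a few lines of Python. Everything here (`fake_llm`, the tool names) is a made-up stand-in, not a real SDK; the point is just the loop: ask the model, run the tool it picks, feed the result back in, repeat until it says it's done.

```python
def list_files(path="."):
    """Pretend filesystem tool."""
    return ["models.py", "views.py"]

def write_file(name, content):
    """Pretend file-writing tool."""
    return f"wrote {len(content)} bytes to {name}"

TOOLS = {"list_files": list_files, "write_file": write_file}

def fake_llm(history):
    """Stand-in for a real model: picks the next step from the history."""
    if not any(msg.startswith("tool:") for msg in history):
        return {"tool": "list_files", "args": {}}
    return {"done": "created tests for views.py"}

def run_agent(prompt):
    history = [f"user: {prompt}"]
    while True:  # the agent loop
        step = fake_llm(history)
        if "done" in step:
            return step["done"]
        result = TOOLS[step["tool"]](**step["args"])  # take action
        history.append(f"tool: {result}")  # gather context for the next round

print(run_agent("add tests for the views"))  # → created tests for views.py
```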
You start with a “prompt”, the agent starts its loop and after a while it is done. In the meantime you can interrupt/steer or give more “context”. Within the loop, three things happen:
Gather context. A “context” is something the LLM can use to gather information. You can attach a github issue, for instance. Or you attach files. Tell it “follow the conventions in this file”. Instruction files also help: copilot-instructions.md, agents.md. There’s something called “context engineering”: if your AI agent accumulates too much context, it becomes less effective, so cleaning up old parts of the context might help.
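For illustration, such an instructions file could look something like this. The filenames come from the talk; the contents (and the paths in them) are made up:

```markdown
# Project conventions for AI agents

- This is a Django project; use class-based views.
- Follow the model style used in core/models.py.
- Every new view needs a test in tests/.
- Run the linter before committing.
```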
Take action. MCP, model context protocol, is an open protocol for giving agents access to tools. You can write your own, btw.
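The idea behind MCP can be sketched like this: a server advertises tools (a name, a description, an input schema) and the agent invokes them by name. This is not the real MCP wire protocol (which is JSON-RPC based) and the tool here is invented; it just shows the concept.

```python
import json

def ticket_count(project: str) -> int:
    """Pretend integration with an issue tracker."""
    return {"django": 3, "celery": 1}.get(project, 0)

TOOLS = {
    "ticket_count": {
        "description": "Number of open tickets for a project",
        "input_schema": {"project": "string"},
        "fn": ticket_count,
    },
}

def list_tools():
    """What a server would advertise to the agent (everything except the fn)."""
    return [{"name": name, **{k: v for k, v in tool.items() if k != "fn"}}
            for name, tool in TOOLS.items()]

def call_tool(name, arguments):
    """What happens when the agent invokes a tool."""
    return TOOLS[name]["fn"](**arguments)

print(json.dumps(list_tools(), indent=2))
print(call_tool("ticket_count", {"project": "django"}))  # → 3
```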
Related, you can also add “skills”. A skill is a markdown file describing to your agent how to do something, like “run this python script to generate an email” and “send it using this method”.
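A skill file, in that spirit, might look like this. This is an entirely made-up example (including the script paths), mirroring the “generate an email” idea from the talk:

```markdown
# Skill: send the weekly status email

1. Run `python scripts/generate_email.py` to produce email.txt.
2. Send the result with `python scripts/send_email.py email.txt`.
3. Do not change the recipient list without asking.
```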
Verify results. The amount of code submitted to GitHub is increasing a lot: they expect some 14 billion commits this year, compared to 1 billion in 2025. Much of the increase is AI-assisted programming. What about the quality? That’s where verifying the results, especially automatically, comes in.
Clean code amplifies AI gains, according to a study (link is in her slides). If your project is kept neat and tidy and tested, the results are better. Your agent is kept in check, that way. Unchecked AIs result in a mess.
Test driven development can help. But watch out: an agent often generates its own tests and they’re not always right or complete or honest.
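A made-up illustration of that pitfall: both of these tests pass, but only the second one would actually catch a bug in the (hypothetical) `word_count` helper.

```python
def word_count(text):
    """Hypothetical helper an agent might have written."""
    return len(text.split())

def test_word_count_dishonest():
    # Agent-style test: it passes, but asserts almost nothing.
    assert word_count("hello world") is not None

def test_word_count_honest():
    # Checks actual behaviour, including an edge case.
    assert word_count("hello world") == 2
    assert word_count("") == 0

test_word_count_dishonest()
test_word_count_honest()
print("all tests pass")
```

So review the generated tests with the same suspicion as the generated code.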
She demoed https://playwright.dev , a website testing tool that an agent can steer.
Some git/github tips:
Commit often.
Run experiments in branches.
Watch out when opening pull requests. Many projects get a lot of pull requests, swamping the maintainers. So double-check the code you’re submitting.
Unrelated photo explanation: a trip in November to the Mosel+Eifel region in Germany. The monastery church of Maria Laach (next to a lake that is all that’s left of a volcano that exploded ten thousand years ago).