There's a version of this post where I talk about "leveraging AI to accelerate development velocity." That's not the version I'm writing.
The real version is messier and more interesting.
The Setup
I've been building this website — reaction21.com — on Umbraco 17 with ASP.NET Core. It's a full custom build: feature-based architecture, Vite for bundling, React components mixed with Razor views, custom block types for the CMS, accessibility requirements baked in from the start. Not a WordPress theme. Not a page builder. A real application that happens to be a website.
And I've been using Claude Code — Anthropic's CLI tool for AI-assisted development — throughout the build.
Not as a code generator I paste from. As an actual development partner that runs in my terminal, reads the codebase, writes code, runs tests, creates pull requests, and tracks tasks in ClickUp. When I say Claude Code built this website with me, I mean it in a pretty literal sense.
What "Using Claude Code" Actually Looks Like
Here's a thing people misunderstand about AI coding tools: the magic isn't in the autocomplete. It's in the context.
Claude Code can read every file in my project. When I ask it to build a new CMS block, it already knows the patterns — the ViewModel naming conventions, how Mappers are structured, that every block needs aria-labelledby, that SCSS files use BEM with the r21- prefix, that strongly-typed Umbraco model access is non-negotiable (the dynamic string-based access silently fails in ways that are nearly impossible to debug). It knows all of this because it read the codebase.
The result is that when I say "create a testimonial block with a quote, name, and title," it doesn't ask me how blocks work. It just builds one that fits.
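The strongly-typed rule is worth a concrete illustration. Here's a sketch of the difference — the block and property names are hypothetical, not the site's actual models:

```csharp
// Hypothetical testimonial block — names are illustrative only.

// Dynamic, string-based access: a typo in "quoteText" still compiles
// and just returns null at runtime. Nothing tells you what went wrong.
var quote = block.Content.Value<string>("quoteText");

// Strongly-typed access via Umbraco's generated ModelsBuilder classes:
// a typo in the property name is a compile-time error, not a silent null.
var testimonial = (TestimonialBlock)block.Content;
var quoteTyped = testimonial.QuoteText;
```

That compile-time failure mode is exactly why the convention is non-negotiable: the dynamic version fails in a way that looks like missing content, not broken code.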
The Custom Skills Part
Here's where it gets more interesting.
Umbraco's block system has enough moving parts that getting AI to work with it correctly isn't obvious. You need to create element types in the CMS via MCP (the Model Context Protocol, which lets Claude talk directly to Umbraco's API), create data types, register blocks, generate C# model classes, write the Razor view, wire up SCSS, and validate the whole thing actually works in the browser.
That's about eight steps with a specific order, specific naming conventions, and specific gotchas — like the fact that the Umbraco MCP will return "success" even when it saved something incorrectly, so you always have to re-fetch and validate.
I wrote a custom skill file — essentially a detailed instruction document — that teaches Claude exactly how this project's block system works. File locations, naming conventions, the validation workflow, common failure modes, which MCP calls to make in what order.
Now when I say "create a stats block with a heading and three metric items," Claude follows the full workflow from element type creation to browser validation. It knows about the composition ID for block layout settings. It knows the Page Blocks data type ID that needs to be updated. It handles the whole thing.
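To make this concrete, here's a condensed, hypothetical sketch of what a skill file like this looks like — the real one is longer and more specific, and the details below are reconstructed from the workflow described above, not copied from it:

```markdown
# Skill: Create an Umbraco Block

## Workflow (in this order)
1. Create the element type in the CMS via MCP.
2. Create or update the data type; add the block to the Page Blocks data type.
3. Generate the C# model class (strongly-typed access only).
4. Create the ViewModel and Mapper following the existing naming conventions.
5. Write the Razor view — every block gets aria-labelledby.
6. Wire up SCSS using BEM with the r21- prefix.
7. Register the block and rebuild.
8. Validate the rendered block in the browser.

## Gotchas
- The Umbraco MCP can report "success" on a save that stored bad data.
  Always re-fetch the element type and verify its properties before moving on.
```

The point isn't the specific steps — it's that once they're written down like this, Claude stops improvising and starts following the project's actual process.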
This is what "custom AI skills" means in practice: you're not just using a general-purpose AI tool. You're building a tool that knows your specific system.
What Works Really Well
Repetitive structural work is nearly free. Every new block follows the same eight-step pattern. Writing that by hand took maybe 30-45 minutes per block. With Claude, it's 5-10 minutes including review. The quality is the same. Probably better, because it never forgets a step.
Test writing. I have a pretty clear unit testing plan for this project. Writing the actual tests is tedious work. Claude handles it well — it understands xUnit patterns, knows how to mock Umbraco services, follows the testing conventions we've established. I write the test for the first case of something, then Claude handles the variations.
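The "I write the first case, Claude writes the variations" pattern looks something like this in practice. A hypothetical xUnit example — the mapper and model names are illustrative, not from the real codebase:

```csharp
// Hypothetical mapper test — TestimonialMapper and its types are
// stand-ins for the project's real classes.
public class TestimonialMapperTests
{
    [Fact]
    public void Map_CopiesQuoteAndAttribution()
    {
        var block = new TestimonialBlock
        {
            QuoteText = "Great work.",
            AuthorName = "Ada",
            AuthorTitle = "CTO",
        };

        var viewModel = TestimonialMapper.Map(block);

        Assert.Equal("Great work.", viewModel.Quote);
        Assert.Equal("Ada", viewModel.Name);
        Assert.Equal("CTO", viewModel.Title);
    }
}
```

I write one test like this to pin down the shape; Claude then fills in the edge cases — null properties, empty strings, missing nested content — following the same arrange-act-assert pattern.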
The development workflow is genuinely automated. I tell Claude what I'm building, it finds or creates the ClickUp task, creates a branch, does the work, runs the tests, creates the PR. That workflow used to involve me switching between five different tools. Now I mostly stay in the conversation.
What Still Needs a Human
I want to be straight about this because a lot of AI coverage skips it.
Architectural decisions are still mine. Claude will implement whatever I ask. It won't tell me that a feature I want is structurally wrong, or that there's a better way to model the data. It executes. The judgment about what to build and how to structure it is still entirely on me.
Anything that requires reading between the lines. Client feedback, scope creep, knowing when "good enough" is good enough versus when something needs to be redone. That's experience talking, not a language model.
Catching subtle bugs. Claude writes good code. But it can also write code that looks correct and fails in a specific edge case. Code review still matters. Running the application and actually testing it still matters.
The initial setup. Getting Claude Code configured correctly, writing the skill files, establishing the conventions it should follow — that's investment upfront. Not difficult, but not automatic.
The Meta Part
Here's the thing I keep thinking about: I'm using AI to build a website that advertises AI services.
There's something genuinely funny about that. The about page, the services pages, the blog — all of it built with the help of a tool I also help clients evaluate and adopt. It's either very on-brand or very circular, depending on how you look at it.
What it's actually given me, though, is lived experience to draw from. When a client asks me whether AI-assisted development is real or hype, I don't have to speculate. I built something with it. I know where it's fast and where it's slow. I know what skill files do, and why they matter, and what happens when you skip them.
That's worth more than reading about it.
The Reaction21 website is built on Umbraco 17, ASP.NET Core 10, and Vite. If you're considering a similar build — or evaluating how AI development tools might fit into your own work — I'm happy to talk through what we learned.