Claude Code Mastery, Part 11/12
Advanced Patterns — Hooks, MCP Servers, Custom Tools, System Prompts
Once you've outgrown the defaults: hooks for deterministic side effects, MCP servers for org-specific data, custom tools, and system-prompt surgery.
The defaults in Claude Code take you a long way. But there is a point — usually around month three of serious use — where you outgrow them. You want a deterministic side effect after every tool call. You want the agent to read your internal wiki. You want to ship a custom tool that talks to your company's bespoke API.
That is what this article is for. Four patterns:
- Hooks — deterministic glue between agent steps.
- MCP servers — company data the agent can read.
- Custom tools — your own commands wired into the agent.
- System-prompt surgery — when prompts alone won't get you there.
1. Hooks — deterministic glue
A hook is a script that fires on a known event: `onFileEdit`, `onShellRun`, `onSessionStart`, `onSessionEnd`, and so on. Hooks are not "AI"; they are plain shell scripts.
Why they matter: they let you add deterministic guarantees around an otherwise probabilistic agent.
Examples I run on every project:
```json
{
  "hooks": {
    "onSessionStart": "scripts/claude-onstart.sh",
    "onFileEdit": "scripts/claude-onedit.sh",
    "onSessionEnd": "scripts/claude-onend.sh"
  }
}
```
`scripts/claude-onstart.sh`:

```bash
#!/usr/bin/env bash
# Snapshot the working tree before the agent starts.
git stash push -u -m "claude-pre-session-$(date +%s)" --keep-index || true
```
`scripts/claude-onedit.sh`:

```bash
#!/usr/bin/env bash
# After every file edit, run prettier on the changed file only.
file="$1"
if [ -f "$file" ]; then
  pnpm prettier --write "$file" --log-level silent
fi
```
`scripts/claude-onend.sh`:

```bash
#!/usr/bin/env bash
# At end of session, ship the transcript.
cp ~/.claude/sessions/last.jsonl "/var/log/claude/$(date +%F-%H%M).jsonl"
```
Three hooks. Every session is now snapshot-able, formatted, and audited. None of these are AI; all of them are infrastructure.
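The snapshot from the on-start hook is only useful if you can get it back. A minimal restore sketch, assuming the stash-message convention above; it builds a throwaway repo first so it is self-contained:

```bash
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
echo scratch > scratch.txt                          # uncommitted work the hook would snapshot
git stash push -q -u -m "claude-pre-session-$(date +%s)"
# scratch.txt is now gone from the working tree; find the stash by its message and pop it.
stash=$(git stash list | grep -m1 claude-pre-session | cut -d: -f1)
git stash pop -q "$stash"
cat scratch.txt    # prints: scratch
```

The same `grep` on the stash message works however many sessions deep you are, since each snapshot carries its own timestamp.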
2. MCP servers — your data into the agent
Model Context Protocol (MCP) is the standard for letting an agent read external data sources at session time. Instead of pasting in a Confluence page, you connect an MCP server and the agent browses it on demand.
The patterns I've seen pay off:
- MCP for the bug tracker. Agent reads the ticket, including comments, before it starts implementing.
- MCP for the staging DB schema. Agent can introspect tables instead of guessing.
- MCP for runbooks. Agent reads `docs/runbooks/<incident-type>.md` automatically.
Configuration lives in `.claude/mcp.json`:

```json
{
  "servers": {
    "linear": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-linear"],
      "env": { "LINEAR_API_KEY": "${LINEAR_API_KEY}" }
    },
    "company-docs": {
      "command": "node",
      "args": ["./mcp/company-docs.js"]
    }
  }
}
```
Two warnings:
- MCP servers run with your privileges. Treat them like CLIs; review their code before plugging them in.
- MCP can leak. If you connect a server that exposes customer data, every session can read it. Scope tightly.
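One way to act on the first warning: launch each server through `env -i` with an explicit allow-list, so it inherits only the variables it needs rather than your full shell environment. A sketch, with `printenv` standing in for the real server command (the variable names are illustrative):

```bash
# A secret that should never reach the MCP server:
export SECRET_TOKEN="dont-leak"
export LINEAR_API_KEY="demo-key"

# `env -i` clears the environment; we pass back only what the server needs.
# The real invocation would end in e.g. `npx -y @modelcontextprotocol/server-linear`.
env -i PATH="$PATH" LINEAR_API_KEY="$LINEAR_API_KEY" printenv LINEAR_API_KEY
env -i PATH="$PATH" LINEAR_API_KEY="$LINEAR_API_KEY" printenv SECRET_TOKEN \
  || echo "SECRET_TOKEN is not visible"
```

The first `printenv` prints `demo-key`; the second finds nothing, because the secret was never passed through.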
3. Custom tools — your own verbs
Sub-agents and slash commands let you reuse text. Custom tools let you add callable verbs.
Example: a deploy-preview tool. The agent should not run shell to deploy a preview — too many ways for that to go wrong. Instead, expose a tool:
```json
{
  "tools": {
    "deploy-preview": {
      "description": "Deploy a preview environment for the current branch.",
      "schema": {
        "type": "object",
        "properties": {
          "branch": { "type": "string" }
        },
        "required": ["branch"]
      },
      "command": "scripts/deploy-preview.sh"
    }
  }
}
```
Now the agent sees a verb called `deploy-preview` with a typed input. The script you write decides what it actually does. The agent cannot escape the contract.
This is the same pattern I'd use for:
- `run-load-test`
- `migrate-staging`
- `tail-prod-logs` (read-only)
- `notify-pager-duty`

Each one is a guardrail that says "the agent can do this thing, exactly this way, never deeper."
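What `scripts/deploy-preview.sh` looks like inside is up to you; the point is that the script, not the model, enforces the contract. A hypothetical sketch that rejects anything but a plain branch name (the `deploy_preview` function and its messages are illustrative):

```bash
deploy_preview() {
  branch="$1"
  # Enforce the contract: only plain branch names, no shell metacharacters.
  case "$branch" in
    ""|*[!A-Za-z0-9/._-]*)
      echo "refusing to deploy: invalid branch name" >&2
      return 1
      ;;
  esac
  echo "deploying preview for $branch"
  # ...the actual deploy commands would go here...
}

deploy_preview "feat/cache-layer"        # prints: deploying preview for feat/cache-layer
deploy_preview 'feat; rm -rf /' || true  # rejected: contains metacharacters
```

Even if the model hallucinates a malicious or malformed input, the worst it can get is an error message.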
4. System-prompt surgery
Sometimes a sub-agent's behaviour is almost right and a tweak in the system prompt closes the gap. Three patterns:
4a. The "rules" block
Always put non-negotiables under a header named `rules`:
```
# rules
- Never add @ts-ignore.
- Never modify files in /infra/.
- Always run `pnpm test` before declaring success.
```
The agent treats these as harder than mere instructions because they look like a list of laws.
4b. Output schema first
If you want structured output (handoffs, JSON, YAML), put the schema before the prose explanation. Agents anchor on the first 200 tokens.
```
# output schema (always at the top of the system prompt)
status: ok | needs-human | failed
artifacts: [path: string]
notes: string
next: code-reviewer | release-bot | done

# rationale (below the schema, never above)
# You output this so the next agent in the pipeline knows exactly where you are.
```
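A schema is only as good as whatever checks it. A cheap pipeline-side sketch that verifies a handoff contains every required key before routing it onward; `grep` stands in for a real YAML parser, and the sample handoff is illustrative:

```bash
handoff='status: ok
artifacts: [{path: "lib/cache.ts"}]
notes: "LRU + TTL implemented."
next: code-reviewer'

# Reject the handoff unless all four schema keys are present.
for key in status artifacts notes next; do
  printf '%s\n' "$handoff" | grep -q "^$key:" \
    || { echo "reject handoff: missing key $key"; exit 1; }
done
echo "handoff valid"    # prints: handoff valid
```

A check like this is what turns the schema from a polite request into an enforced interface between pipeline stages.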
4c. Counter-examples
Show what NOT to do:
```
# bad output
"Done. The cache is implemented."

# good output
status: ok
artifacts: [{path: "lib/cache.ts"}]
notes: "LRU + TTL implemented. All tests green."
next: code-reviewer
```
Counter-examples are weirdly more effective than positive examples for getting compliance. I do not have a deep theory of why; empirically, it works.
When you need all four together
The advanced workflow that pays off for real teams:
```
hook (onSessionStart)           → stash snapshot
agent reads CLAUDE.md           → context
agent calls MCP server          → reads Linear ticket
agent calls custom tool         → run-load-test
hook (onFileEdit)               → prettier
agent emits structured handoff  → code-reviewer
hook (onSessionEnd)             → ship transcript
```
Every layer adds determinism, traceability, or capability. None of them are "the agent." All of them are scaffolding.
That is the lesson, four articles deep into advanced patterns: the agent is the small part. The scaffolding is the engineering work.
Last article in the series: The Future of Agentic Development — where this is going in 2026, what I'd bet on, and the line beyond which I'd be sceptical.
Series — Claude Code Mastery
- Part 01: Claude Code vs ChatGPT vs Copilot vs Agents. Most developers are using the wrong AI tool for the wrong job. Here is why — and what to do instead.
- Part 02: Installation + The Antigravity Workflow. Installing Claude Code is a 30-second job. Setting up the workflow that makes the agent feel like it's doing the heavy lifting — that's the part nobody writes about.
- Part 03: Writing Prompts That Work. "Make it better" is not a prompt. "Refactor this for performance" is not a prompt. Here is the four-part structure that makes Claude Code actually finish what you asked.
- Part 04: Slash Commands — Building a Project from A to Z. /init, /agents, /compact and your own custom commands. The toolkit that lets you go from empty folder to running app without leaving the Claude prompt.
- Part 05: Sub-Agents — The 11 Specialized Experts Inside Claude Code. Slash commands reuse prompts. Sub-agents reuse whole personas — code-reviewer, test-writer, migration-runner. Here is the team you should have on day one.
- Part 06: Production Codebase Safety. Permissions, guardrails, and what not to automate. The unsexy article that decides whether Claude Code becomes infrastructure or becomes the reason you got paged at 2 AM.
- Part 07: Multi-Agent Pipelines. Chaining sub-agents, running them in parallel, and the patterns for 'review-while-coding' without losing your mind. Where Claude Code starts to feel like a small engineering org.
- Part 08: Building Complete Features. From Linear ticket to merged PR with Claude Code. A real, honest walk-through — what the prompt looked like, what the agent got right, what I caught in review.
- Part 09: Testing and Debugging. Letting Claude Code own the entire test loop. Including the parts that make engineers nervous: regressions, flakies, integration tests, and the stack-trace whisperer.
- Part 10: Team Workflows. How engineering teams are actually integrating Claude Code today. The shared .claude/ folder, the review rituals, and the anti-patterns I keep seeing in the wild.
- Part 11: Advanced Patterns — Hooks, MCP Servers, Custom Tools, System Prompts (you are here). Once you've outgrown the defaults: hooks for deterministic side effects, MCP servers for org-specific data, custom tools, and system-prompt surgery.
- Part 12: The Future of Agentic Development. Where this is going in 2026 and beyond. What I'd bet on, what I would not, and the line where I get sceptical of the hype.