Documentation That Developers Actually Write and Read
I've watched three different teams spend weeks building beautiful documentation sites. Confluence spaces with custom templates. Notion databases with ownership tags. GitBook deployments with search and versioning. Six months later, every one of those systems was a graveyard. Pages last updated in Q1. Broken links everywhere. New engineers ignoring the docs entirely because they learned on day two that the docs were lies.
The problem isn't that developers are lazy. The problem is that most documentation strategies are designed by people who think documentation is a content problem. It's not. It's an incentive problem and a proximity problem. I've found exactly one approach that works long-term, and it looks nothing like what most teams attempt.
The Contrarian Take: Most Documentation Should Not Exist
Here's what I believe after 8 years of trying everything: 80% of the documentation that teams write should never have been created. It was outdated before the ink dried because the code it described was still changing, and it created a false sense of understanding that was worse than no docs at all.
The documentation that survives and stays accurate has three properties:
- It lives next to the code it describes (not in a separate system)
- It explains WHY, not WHAT (the code already explains what)
- It breaks the build when it's wrong (enforced accuracy)
Everything else is a vanity project.
What Developers Actually Read
I ran an experiment across two teams. I instrumented our documentation with analytics: page views, time on page, and search queries. Here's what I found after 90 days:
| Documentation Type | Views/Month | Avg Time on Page | Updated Regularly |
|---|---|---|---|
| API reference (auto-generated) | 342 | 45 seconds | Yes (automated) |
| Architecture Decision Records | 128 | 3.2 minutes | Yes (append-only) |
| Inline code comments (why-focused) | N/A (in IDE) | N/A | Yes (code review enforced) |
| Onboarding guide | 23 | 8.1 minutes | No (stale after 2 months) |
| Module README files | 89 | 1.4 minutes | Sometimes |
| Confluence design docs | 12 | 32 seconds | No (abandoned) |
| Runbooks | 67 | 5.7 minutes | Only after incidents |
The pattern is clear. Developers read documentation that's either auto-generated (always accurate), append-only (ADRs that never need updating), or embedded in code (can't ignore it). They ignore everything that requires manual maintenance in a separate system.
The Three Documentation Types That Survive
Type 1: Architecture Decision Records (ADRs)
ADRs are the single highest-ROI documentation practice I've found. They record why a decision was made, what alternatives were considered, and what tradeoffs were accepted.
# ADR-012: Use Event Sourcing for Payment State
## Status
Accepted (2026-01-15)
## Context
Payment state was stored as mutable rows. We had 3 incidents in Q4
where state corruption caused double-charges. Debugging required
reconstructing state from logs manually.
## Decision
Payment state transitions use event sourcing. Current state is
derived from replaying events. Events are immutable and append-only.
## Consequences
- Slower reads (must replay events or maintain projections)
- Storage cost increase (~40% more for payments table)
- Complete audit trail eliminates the double-charge class of bugs
- Team needs training on event sourcing patterns
## Alternatives Considered
- State machine with audit log: simpler but doesn't give us replay
- Soft deletes with versioning: doesn't capture intent of transitions
The key insight: ADRs are append-only. You never update an ADR. If a decision changes, you write a new ADR that supersedes the old one. This eliminates the staleness problem entirely.
I keep ADRs in docs/adr/ in the repo, not in Confluence. They're versioned with the code. They show up in code search. They're reviewed in PRs.
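The append-only convention is easy to check mechanically. As a minimal sketch (my own assumption — the article doesn't prescribe tooling, and `parseAdrStatus` is a hypothetical helper), a script can classify each record's Status line so a pre-merge hook can warn when a PR edits the body of an Accepted record instead of adding a superseding one:

```typescript
// Minimal sketch: classify an ADR's status from its markdown text.
type AdrStatus = "Accepted" | "Superseded" | "Proposed" | "Unknown";

function parseAdrStatus(markdown: string): AdrStatus {
  // The status word sits on the line after the "## Status" heading,
  // e.g. "Accepted (2026-01-15)" or "Superseded by ADR-019".
  const match = markdown.match(/## Status\s*\n\s*(\w+)/);
  const word = match ? match[1] : "";
  if (word === "Accepted" || word === "Superseded" || word === "Proposed") {
    return word;
  }
  return "Unknown";
}
```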
Type 2: Inline Context Comments
Not comments that explain what code does. Comments that explain why the code does something non-obvious.
// BAD: explains what (the code already says this)
// Iterate through users and filter by active status
const activeUsers = users.filter(u => u.isActive);
// GOOD: explains why (context that isn't in the code)
// We filter before the DB query instead of using a WHERE clause
// because the isActive flag depends on a 30-day rolling window
// that's computed client-side. See ADR-008 for why we didn't
// move this to a database computed column.
const activeUsers = users.filter(u => u.isActive);
The "why" comment survives because it's attached to the code it describes. When the code changes, the comment is right there in the diff. Reviewers will catch stale comments. A wiki page 3 clicks away? Nobody checks that.
Type 3: Executable Documentation
Documentation that's verified by your CI pipeline. If the docs are wrong, the build fails.
// API documentation that's tested
/**
* @example
* const result = await processPayment({ amount: 1000, currency: "USD" });
* // Returns: { id: "pay_xxx", status: "captured", amount: 1000 }
*/
export async function processPayment(input: PaymentInput): Promise<PaymentResult> {
// ...
}
// In your test suite:
// Extract @example blocks and run them as tests
// If the example doesn't match runtime behavior, the test fails
Other forms of executable docs:
- OpenAPI specs generated from code annotations, validated in CI
- Database schema docs generated from Prisma/TypeORM models
- Dependency graphs auto-generated and committed to the repo
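The "extract @example blocks" step can be sketched as a small parser. This is illustrative only, not a real library: it pulls the example lines out of JSDoc blocks so a harness could compile and run them; the running half (execute and compare output) is omitted, and real doc-test runners handle edge cases this skips.

```typescript
// Sketch of the extraction half of doc-testing: pull @example snippets
// out of JSDoc comment blocks in a source string.
function extractExamples(source: string): string[] {
  const examples: string[] = [];
  const blocks = source.match(/\/\*\*[\s\S]*?\*\//g) || [];
  for (const block of blocks) {
    // Strip the /** */ delimiters and the leading " * " decoration
    const lines = block
      .replace(/^\/\*\*/, "")
      .replace(/\*\/$/, "")
      .split("\n")
      .map((l) => l.replace(/^\s*\*\s?/, ""));
    let current: string[] | null = null;
    for (const line of lines) {
      if (line.trim().indexOf("@example") === 0) {
        if (current) examples.push(current.join("\n").trim());
        current = []; // start collecting a new example
      } else if (line.trim().charAt(0) === "@") {
        if (current) examples.push(current.join("\n").trim());
        current = null; // another JSDoc tag ends the example
      } else if (current) {
        current.push(line);
      }
    }
    if (current) examples.push(current.join("\n").trim());
  }
  return examples;
}
```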
The Documentation Proximity Principle
I've developed a rule I call the Proximity Principle: documentation accuracy is inversely proportional to its distance from the code it describes.
| Distance from code | Accuracy after 6 months |
|---|---|
| Inline comments | ~85% |
| Same-repo README | ~60% |
| Same-repo docs/ | ~45% |
| Wiki (linked from repo) | ~20% |
| Wiki (not linked) | ~5% |
This isn't about discipline. It's about friction. Updating an inline comment takes 2 seconds during a code change. Updating a wiki page takes context switching to a browser, finding the right page, editing, and saving. That friction compounds across hundreds of changes until everyone gives up.
The Stealable Framework: The WRITE System
Here's the process I use to build documentation that lasts:
W - Where it lives matters most. All docs in the repo. No external wikis for code documentation. Period.
R - Review docs in PRs. If a PR changes behavior, the reviewer checks for corresponding doc updates. Add a PR template checklist item: "Updated relevant ADRs or inline comments? [ ]"
I - Incentivize through automation. Auto-generate everything you can. API docs from code. Dependency graphs from imports. Schema docs from models. What's automated can't go stale.
T - Trim aggressively. Every quarter, delete documentation that hasn't been viewed or updated in 90 days. Dead docs are worse than no docs because they erode trust in all docs.
E - Enforce with CI. If an ADR references a file that no longer exists, fail the build. If an API example doesn't compile, fail the build. If a README references a deprecated config option, fail the build.
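The ADR-reference check in the E step could look something like this. The file name `check-adr-references.js` comes from the CI snippet; the body is my sketch, not the article's actual script, and it's kept pure (existing paths passed in as an array) so the filesystem and glob wiring stay in the CI layer.

```typescript
// Sketch: find backtick-quoted file paths in ADR text that no longer
// exist. A real script would read docs/adr/*.md and build existingPaths
// from the working tree.
function findMissingReferences(adrText: string, existingPaths: string[]): string[] {
  // Treat backtick-quoted tokens with a file extension as repo references
  const refs: string[] = [];
  const pattern = /`([\w./-]+\.\w+)`/g;
  let m: RegExpExecArray | null;
  while ((m = pattern.exec(adrText)) !== null) {
    refs.push(m[1]);
  }
  return refs.filter((ref) => existingPaths.indexOf(ref) === -1);
}
```

Fail the build when the returned list is non-empty, printing the stale references so the author can write a superseding ADR or fix the path.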
# Example CI check for documentation freshness
documentation-check:
  script:
    - node scripts/check-adr-references.js   # verify ADR file references exist
    - npx tsdoc-testify src/                 # verify @example blocks compile
    - node scripts/check-dead-links.js docs/ # verify internal links resolve
  rules:
    - changes:
        - "docs/**"
        - "src/**"
What to Do Monday Morning
- Audit your existing docs. Check view counts. Delete everything with zero views in 90 days. Yes, really.
- Start writing ADRs. Begin with your next technical decision. It takes 15 minutes. Keep it in the repo.
- Add a PR template item. "Did you update relevant docs?" This alone increased our doc update rate by 3x.
- Kill your wiki for code docs. I know this sounds extreme. Move the 5 pages that actually get viewed into the repo. Let the rest die.
Documentation isn't a writing problem. It's a systems design problem. Design the system so that accurate docs are the path of least resistance, and your team will write docs that developers actually read. Fight human nature with a wiki-based strategy, and you'll lose every time.