Chapter 8 is the patterns and practices chapter. It captures the reusable concepts that keep your domain code lean and your runtime scenarios readable: security, resilience, observability, integration rules, and other "plumbing" that should be consistent.
In this article I explain what belongs in chapter 8, what to keep out, a minimal structure you can copy, plus a small example from Pitstop.
This post is about chapter 8: Cross-cutting concepts,
the first chapter in the “Reusables, decisions, and qualities” group.
Chapters 5, 6, and 7 described structure, runtime, and deployment.
Chapter 8 is where I document the reusable ideas that make those chapters readable and maintainable.
I think of chapter 8 as the patterns and practices chapter.
It often covers the “non-functional” code: not the business logic, but everything needed to make that core behave correctly.
Note
arc42 calls this chapter “Cross-cutting concepts”.
In practice, I often just call it “Concepts” as I treat it as “concepts relevant to the whole system at this level of detail”.
For a product landscape that can mean platform-wide conventions.
For a single microservice it can mean service-wide patterns and internal rules.
What belongs in chapter 8 (and what does not)
The main job of chapter 8 of an arc42 document is to answer:
Which patterns and practices should be applied consistently across the system, and how do we do that?
What belongs here:
Patterns and rules that apply across multiple building blocks or scenarios, not just one module.
Reusable conventions you want implemented consistently over time, even if they currently apply only once.
“Plumbing” that supports the domain but is not domain logic itself: the infrastructure behavior that makes core code work correctly.
Concept-level configuration behavior:
what a mode or flag means and which behavior changes when it toggles.
Where and how to configure it usually lives in chapter 7.
Shared domain definitions (aggregates, state machines, vocabulary) that every module depends on.
A simple test that works well:
If you want developers to implement something the same way in multiple places over time, document it here.
Link to it from the scenarios in chapter 6 and from relevant building blocks in chapter 5.
What does not belong here:
Feature-specific domain rules and workflows.
Those belong in the building blocks (chapter 5) and scenarios (chapter 6).
A repeat of the runtime scenarios.
Chapter 8 should let chapter 6 stay lean.
A raw list of configuration settings.
Chapter 8 should explain what a setting means and why it exists, not list every key in the system.
The full reference is better placed in chapter 7 or a dedicated config reference.
Highly local implementation details that are unlikely to be reused.
Those belong close to the code, or in an ADR when it is a decision with consequences (chapter 9).
Hard architectural constraints or enterprise policies.
Mandates like “Cloud First” or compliance rules belong in chapter 2.
Chapter 8 documents the reusable patterns you designed, not the constraints you were forced to follow.
Tip
Chapter 8 is where you replace repeated paragraphs in chapter 6 with one link.
That is a good trade.
Common concept categories
Not every system needs all of these, but this list helps as a starting point.
Pick what applies:
Domain model: aggregate boundaries, state machines, shared vocabulary, key invariants
Test strategy: test data management, standard tools, integration test patterns, performance constraints
UI/UX patterns: standard layouts, error notifications, accessibility rules, design system integration
Who this chapter is for
Most of the time, chapter 8 is primarily useful for the dev team and maintainers.
It prevents five different implementations of the same thing.
External stakeholders usually do not care about your retry policy or correlation ID format.
They might care when it explains system guarantees (auditability, safety, recovery time), or when they want inspiration because your team is the shining example, sharing its awesome implementation in its arc42 document. 💎😉
Chapter 8 vs. Chapter 9: Concepts vs. decisions
A common question: when does something belong in chapter 8 versus chapter 9 (ADRs)?
The boundary is clearer than it first appears:
Chapter 8 documents how we do X consistently: the pattern, the practice, the implementation standard.
Chapter 9 documents why we chose X over Y: the decision, the alternatives considered, the trade-offs, and the context that made the choice make sense.
They work together:
The ADR explains the choice and constraints.
The concept explains how to implement it correctly and where it shows up.
Linking them:
Always cross-reference. The concept should link to the ADR. The ADR should link to the concept.
Tip
If you document a concept without a decision, that is fine; many concepts emerge gradually.
If you document a decision without a concept to implement it, that might be a signal the decision is planned but not yet implemented.
Aggregates, entities, and the shared domain vocabulary
In many systems, there are a few domain concepts that show up everywhere:
work orders, customers, assets, cases, incidents, whatever your core “things” are.
When those concepts apply across the whole application, I document their aggregate boundaries and entity responsibilities in chapter 8.
Not because chapter 8 is a domain chapter, but because these definitions act like a shared rulebook.
This helps in three places:
It keeps chapter 5 focused on structure, not repeating the same domain definitions per building block.
It keeps chapter 6 readable, because scenarios can reference “WorkOrder” and everyone knows what that means.
It reduces accidental coupling, because aggregate boundaries become explicit.
What I put here is deliberately lightweight:
Aggregate name and purpose
What it owns (entities, value objects)
Key invariants (rules that must always hold)
State transitions and lifecycle notes
Identity and scoping rules (IDs, tenant/site boundaries)
Events published or important integration touch points (high level)
If you need a full data dictionary or complete schema documentation, do not force it into this chapter.
Link to a domain model reference, or split it into a separate document and keep chapter 8 as the “shared rules” summary.
Tip
While documenting these core terms, check if they are already in the glossary (chapter 12).
If a term is strictly structural, keep it here. If it is business language used by stakeholders, ensure it lands in chapter 12 too.
How to keep chapter 8 from becoming a junk drawer
This chapter is vulnerable to entropy.
Everything is a “concept” if you stare at it long enough.
A few guardrails that help:
Prefer “rules + rationale” over “technology lists”.
Keep each concept small:
what it is
why it exists
how to implement it
how to test or verify it
where it shows up (links to scenarios, building blocks, ADRs)
If a section becomes a wall of text, split it:
move low-level specifics into a code-linked doc and keep chapter 8 as the overview.
When a concept evolves, document both the current standard and the migration path.
Mark old approaches explicitly as “legacy” or “deprecated” with a timeline, and link to the ADR (chapter 9) that explains why it changed.
This prevents new code from following outdated patterns while giving teams visibility into what they need to update.
The minimum viable version
If you are short on time, aim for:
3–6 concepts that either:
already affect multiple parts of the system, or
are patterns you want future work to follow (even if they currently apply once)
For each concept, include:
a short description
the key rule(s)
where it shows up (links)
one or two implementation notes that prevent mistakes
Do not force a rigid template on every concept.
Some concepts need a rules section, some need a diagram, some need one paragraph and a link.
Consistency helps, but clarity helps more.
Example (Pitstop)
Pitstop is my small demo system for this series.
It is intentionally simple, so the documentation stays shareable.
Below are four concept examples that make chapter 6 easier to read,
and make chapter 7 configuration feel meaningful instead of arbitrary.
8.1 Identity and access (RBAC)
Pitstop uses role-based access control (RBAC) to keep workshop actions safe and auditable.
The UI can hide buttons, but the server enforces authorization. The UI is not a security boundary.
Rules
Every endpoint that reads or changes work orders requires an explicit policy.
Authorization is validated server-side for both HTTP and real-time actions.
Claims include a garageId to scope access per site.
Implementation
Auth: JWT bearer tokens.
Authorization: policy-based checks, mapped from roles and claims.
Claims (example)
role: Mechanic, Foreman, ServiceAdvisor
garageId: used for tenant or site scoping
permissions: optional fine-grained list for exceptions (for example discount approval)
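The claims-based rules above can be sketched in code. This is a minimal Python illustration of a server-side policy check, not Pitstop's real implementation: the `Claims` shape, policy names, and the `authorize` helper are all assumptions for this example.

```python
# Sketch of a server-side, policy-based authorization check driven by claims.
# All names (Claims, POLICIES, authorize) are illustrative, not Pitstop's API.

from dataclasses import dataclass, field

@dataclass
class Claims:
    role: str                                      # "Mechanic", "Foreman", "ServiceAdvisor"
    garage_id: str                                 # scopes access per site
    permissions: set = field(default_factory=set)  # fine-grained exceptions

# Policy name -> predicate over the caller's claims.
POLICIES = {
    "WorkOrder.Read":   lambda c: c.role in {"Mechanic", "Foreman", "ServiceAdvisor"},
    "WorkOrder.Update": lambda c: c.role in {"Mechanic", "Foreman"},
    "Discount.Approve": lambda c: c.role == "Foreman" or "discount.approve" in c.permissions,
}

def authorize(claims: Claims, policy: str, resource_garage_id: str) -> bool:
    """Server-side check: the policy must pass AND the claim must match the site."""
    check = POLICIES.get(policy)
    if check is None:
        return False  # unknown policy -> deny by default
    return check(claims) and claims.garage_id == resource_garage_id
```

The point of the sketch: the UI may hide buttons, but this check runs on the server for every endpoint, and the `garageId` comparison enforces site scoping even for valid roles.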
Where it shows up
Scenarios: status updates and overrides in chapter 6.
Deployment: token validation settings and identity provider wiring in chapter 7.
8.2 Work order (aggregate / domain model)
The work order is the central aggregate in Pitstop.
Every module, scenario, and UI revolves around it.
Documenting it here gives the whole team a shared definition to build against.
Aggregate boundary
A work order owns its tasks, status, notes, and parts dependencies.
It does not own the appointment (that belongs to the planning service) or the customer record.
Lifecycle (state machine)
Only forward transitions are allowed by default.
WaitingForParts ↔ InProgress can toggle when parts arrive or a new dependency is found.
A Foreman can force-transition to any state (override).
Key invariants
A work order always has exactly one active status.
Status changes are audited (who/when/why, see concept 8.4).
Identity: WO-{sequence}, scoped by garageId.
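The lifecycle rules above (forward-only by default, the WaitingForParts ↔ InProgress toggle, the Foreman override) can be sketched as a small state machine. State names beyond WaitingForParts/InProgress, their ordering, and the ID format are assumptions for illustration.

```python
# Sketch of the work order lifecycle rules. Everything beyond the rules
# stated in the text (state list, ID format) is an assumption.

ORDER = ["Created", "InProgress", "WaitingForParts", "Completed", "Invoiced"]

# Explicit exceptions to "forward only": parts arrive, or a new dependency is found.
TOGGLES = {("WaitingForParts", "InProgress"), ("InProgress", "WaitingForParts")}

def can_transition(current: str, target: str, role: str = "Mechanic") -> bool:
    if role == "Foreman":
        return True  # a Foreman can force-transition to any state (override)
    if (current, target) in TOGGLES:
        return True
    return ORDER.index(target) > ORDER.index(current)  # default: forward only

def work_order_id(sequence: int, garage_id: str) -> str:
    """Identity: WO-{sequence}, scoped by garageId (scoping format assumed)."""
    return f"{garage_id}/WO-{sequence}"
```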
Where it shows up
Building blocks: Work Order Module in chapter 5.
Scenarios: every chapter 6 scenario references work order state.
Audit: status changes feed the audit log (concept 8.4).
8.3 Connectivity mode
If Pitstop:ConnectivityMode = OfflineFirst, the UI queues first and sends async.
If OnlineFirst, the UI sends immediately and queues only on failure.
The meaning of ConnectivityMode is documented here.
Where it is configured (env vars, config files) is documented in chapter 7.
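The behavioral difference between the two modes can be sketched in a few lines. This is an illustration of the rule, not the real client code; `send` and `queue` are stand-ins for the actual transport and local queue.

```python
# Sketch of how ConnectivityMode changes the client's send behavior.
# send/queue are illustrative stand-ins for the real transport and queue.

def submit_update(mode: str, send, queue) -> str:
    """Returns which path the update took: 'sent' or 'queued'."""
    if mode == "OfflineFirst":
        queue()           # queue first; a background sync sends it later
        return "queued"
    # OnlineFirst: send immediately, queue only on failure.
    try:
        send()
        return "sent"
    except ConnectionError:
        queue()
        return "queued"
```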
Where it shows up
Scenarios: status update flows in chapter 6.
Deployment: the ConnectivityMode setting in chapter 7.
8.4 Observability
Every request and event carries a correlationId so ops can trace a flow end-to-end.
Logs are structured (JSON), and a small set of metrics drives the alerting that lets ops sleep.
Rules
Every log entry includes correlationId, workOrderId (when applicable), and garageId.
Metrics are kept small and actionable:
sync_queue_depth: are outbound updates piling up?
status_update_latency_ms (p95): is the workshop experience degrading?
ws_connected_clients: are workshops connected?
Alert example: sync_queue_depth > 100 for 10 minutes → vendor down or credentials broken.
Where it shows up
Scenarios: every chapter 6 flow carries correlationId.
Deployment: log sink and dashboard configuration in chapter 7.
To browse the full Pitstop arc42 sample, see my GitHub Gist.
Common mistakes I see (and made myself)
Treating “cross-cutting” as a hard gate
Even if you are documenting something small like a microservice, service-wide concepts are still useful.
The chapter title does not need to police you.
Rename the chapter to “Concepts” if that helps, but do not skip it just because you think “cross-cutting” means “multi-service”.
Waiting until a pattern appears everywhere
If you already know a rule should become standard, document it early.
That is how you steer future work. arc42 can start at the drawing table, even before a single line of code is written.
Turning chapter 8 into a dump
A list of random libraries is not a concept chapter.
Prefer rules, rationale, and where it shows up. Future team members or maintainers should be able to read this chapter and understand the key patterns without needing to read every line of it.
Repeating concept explanations in every scenario
If you notice that chapter 6 starts to contain the same text multiple times, move it here and link to it.
No link back to reality
If a concept never shows up in code, runtime scenarios, or configuration, it is probably planned but not yet implemented.
That is fine, but mark it clearly and revisit it. Maybe new insights have emerged and it is no longer the right pattern.
Done-when checklist
🔲 The chapter contains concepts that are reused or intended to be reused over time.
🔲 Each concept includes at least one actionable rule, not only a description.
🔲 Concepts link to where they show up (chapters 5, 6, 7, and later ADRs in chapter 9).
🔲 The chapter helps keep runtime scenarios lean by avoiding repeated explanations.
🔲 A maintainer can implement a new feature without reinventing logging, retries, idempotency, or authorization.
Next improvements backlog
Add links to code locations when concepts map cleanly to modules or packages.
Add verification notes for concepts that can fail in production (dashboards, alerts, runbooks).
Add concept-level configuration tables only for settings that change behavior significantly.
Split large concepts into “overview here, details in a linked doc” when they grow.
Wrap-up
Chapter 8 is where I capture the reusable solutions that make the rest of the document cheaper to maintain.
It keeps the domain code focused, and it keeps chapter 6 readable.
Next up: arc42 chapter 9, “Architectural decisions”, where we record all the decisions that we made along the way.
This post is about chapter 7: Deployment view,
the last chapter in the “How is it built and how does it run” group.
Small milestone: chapter 7 means we are past the halfway point of the 12 arc42 chapters.
Chapter 5 gave us the map (building blocks).
Chapter 6 showed how those blocks collaborate at runtime.
Chapter 7 answers the next question: where do those blocks run, in which environments, and with which settings?
Note
This chapter turns “it works on my machine” from tribal knowledge into shared documentation.
No more guessing which settings matter or where things actually run.
Also: “my machine” can be a perfectly valid environment.
If onboarding and local dev matter, document that setup as a real deployment variant.
What belongs in chapter 7 (and what does not)
The main job of chapter 7 of an arc42 document is to answer:
Where does the system run, how is it wired, and what needs to be configured to make it behave correctly?
What belongs here:
A deployment overview of nodes, environments, and their connections.
Think: hosts, clusters, networks, segments, and the paths between them.
A mapping from building blocks to infrastructure:
which blocks run where, and which ones are “shared” vs “per environment/per site”.
The runtime configuration that is required to run the system and that changes behavior:
environment variables, config files, feature flags, connection strings, default values, and required secrets (at least at a reference level).
Operational concerns that affect how the system holds up:
what scales, what is isolated, what happens when a node goes down.
Trust boundaries and data classification:
which networks are public vs. private, and where sensitive data is allowed to live.
Persistence strategy:
especially for containerized setups, explicitly state where state lives (volumes, managed databases) and if it is backed up.
Quality and/or performance features of the infrastructure when they matter:
expected throughput, latency constraints, availability targets, bandwidth limitations, or time synchronization (NTP) requirements.
Links to deployment assets when they are the source of truth:
Dockerfiles, Helm charts, Kustomize overlays, Terraform/Bicep/ARM, compose files, install scripts, etc.
If your base image choice matters (for size, security, or compliance), add a short note on why.
Tip
If you have full Infrastructure as Code (IaC), Chapter 7 is the map; your Terraform or Bicep is the construction crew.
Do not duplicate every setting from your IaC here. Instead, explain the topology that the code creates.
What does not belong here:
A full re-explanation of your building blocks or domain responsibilities.
This chapter is about placement and wiring, not reintroducing the system.
Detailed runtime scenarios (“and then it calls X”) unless the scenario is specifically about deployment behavior
(e.g., failover sequence, blue-green switch, cold start, disaster recovery).
Interface payload catalogs and protocol specs.
Link to where contracts live, and keep chapter 7 focused on infrastructure and configuration.
A giant unstructured dump of “every setting we ever had” without context.
Configuration belongs here, but it needs a structure: defaults, required vs optional, and what it influences.
Where to document configuration
Note
Strictly speaking, arc42 does not prescribe a configuration section in chapter 7.
The template typically places the what of a setting (meaning, default, contract) in chapter 5 (building block interfaces)
and the how (override strategy, config patterns) in chapter 8 (cross-cutting concepts).
Chapter 7 itself only covers the where: which node, which manifest, which secret store.
I prefer to consolidate configuration in chapter 7.
When a newcomer asks “what do I need to configure to make this run?”,
I want the answer to be in one place, right next to the infrastructure it runs on.
Splitting it across chapters 5, 7, and 8 is structurally clean but practically hard to navigate.
If you have a separate place where configuration is documented (runbook, ops handbook, generated config reference),
link to it and keep chapter 7 as the map.
If you do not have a separate configuration reference, chapter 7 is a practical home for it:
everything needed to make the application run, and everything that changes behavior per environment.
That usually includes:
environment variables and configuration keys
config files and naming conventions (appsettings.{Environment}.json, .env, mounted files, etc.)
default values
“required in production” vs “optional”
where it is set (deployment manifest, secret store, CI variables)
what it impacts (behavior, performance, safety, compliance)
Warning
Never document actual secret values (API keys, passwords, connection strings) in this chapter.
Only document the names of the secrets or which vault they live in.
If I see a potential password in a markdown file, I will find you! 👮♂️😉
A practical structure that stays readable:
a short “configuration model” section (how config is loaded/overridden)
a table of key settings (only the ones that matter)
links to the “full reference” if/when you have one
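A “configuration model” section is easiest to understand with a concrete layering rule. This sketch shows one common layering (defaults, then a per-environment file, then environment variables win); the `Pitstop__` key convention mirrors .NET's `__` section separator, but the function and its names are illustrative, not a real loader.

```python
# Sketch of a layered configuration model: defaults < per-environment file
# < environment variables. The Pitstop__ prefix convention mirrors .NET's
# "__" separator; everything else here is illustrative.

def load_config(defaults: dict, file_settings: dict, env: dict) -> dict:
    config = dict(defaults)
    config.update(file_settings)        # e.g. appsettings.{Environment}.json
    prefix = "Pitstop__"
    for key, value in env.items():      # env vars override everything
        if key.startswith(prefix):
            config[key[len(prefix):]] = value
    return config
```

Documenting the layering once means each setting's table row only needs to say where it is usually set, not re-explain precedence.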
The minimum viable version
If you are short on time, aim for this:
One main deployment diagram for the most important environment (often production-like).
A short mapping table: which building blocks run where.
A small “runtime configuration” section:
the 5–15 settings that decide behavior, plus where they live.
That is already enough to stop most “but it worked yesterday” surprises.
Copy/paste structure (Markdown skeleton)
Sections 7.1 and 7.2 follow arc42’s infrastructure levels.
I add a dedicated configuration section per infrastructure element;
arc42 would split that across chapters 5 and 8 (see the note above).
For multiple environments, arc42 suggests copying the 7.1 structure.
Keep it small and add depth only when it matters.
Pitstop is my small demo system for this series.
It is intentionally simple, so the documentation stays shareable.
This is what chapter 7 looks like when filled in.
7. Deployment view
7.1 Infrastructure Level 1 - Single garage
Situation: one garage runs Pitstop on-prem on a single Docker host;
the UIs and backend run as containers next to the database.
Motivation
Small garages need a self-contained setup that works on a single machine
without external dependencies. All components share one host to keep
operations simple.
Mapping

| Building block              | Runs on          | Notes                     |
|-----------------------------|------------------|---------------------------|
| Workshop Management Service | Docker container | Main backend              |
| Customer Management Service | Docker container | Shares host with backend  |
| Pitstop UI                  | Docker container | Served via reverse proxy  |
| SQL Database                | Docker container | Persistent volume on host |
| Message Broker              | Docker container | RabbitMQ, single node     |
7.1 Infrastructure Level 1 - Multi-site
Situation: a garage chain wants central reporting and audit,
but each site needs fast workshop responsiveness even with shaky connectivity.
Central DB + audit store; site-level caches for workshop responsiveness.
Reporting can run off read replicas.
Motivation
Garage chains need central reporting and audit, but the workshop still
needs to feel fast locally. Sites must keep working even when
connectivity to the central system is unreliable.
Security: network segmentation; outbound allowlist to planning/PSP endpoints.
7.2 Infrastructure Level 2
7.2.1 Docker host
Single Linux host running Docker Engine.
All Pitstop containers and the database run here.
Images are built with multi-stage Dockerfiles to keep the final image small and free of build tooling.
Configuration
Pitstop behavior differs per garage and network reliability.
These settings are owned by Ops and injected via container environment variables
or mounted config files.
Key setting: ConnectivityMode
OnlineFirst (default): normal operation, real-time updates preferred
OfflineFirst: queue updates locally first and sync asynchronously (for unreliable connectivity)
Where it is set: container env var Pitstop__ConnectivityMode or appsettings.{Environment}.json
Example appsettings.Production.json:
```json
{
  "Pitstop": {
    "ConnectivityMode": "OfflineFirst",
    "Realtime": {
      "Transport": "WebSocket",
      "FallbackToPollingSeconds": 5
    },
    "Sync": {
      "RetryPolicy": "ExponentialBackoff",
      "MaxRetries": 10
    }
  }
}
```
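The `Sync` block's `ExponentialBackoff` with `MaxRetries: 10` implies a concrete retry schedule. As a hedged sketch (the base delay and cap are assumptions, not Pitstop settings), the schedule could look like this:

```python
# Sketch of the ExponentialBackoff retry policy: delay doubles per attempt,
# capped, for at most MaxRetries attempts. Base delay and cap are assumed.

def backoff_schedule(max_retries: int, base_seconds: float = 1.0,
                     cap_seconds: float = 300.0) -> list[float]:
    """Delay before each retry: base * 2^attempt, capped."""
    return [min(base_seconds * (2 ** attempt), cap_seconds)
            for attempt in range(max_retries)]
```

Computing the schedule (rather than sleeping inside the loop) keeps the policy trivially testable, which is exactly the kind of implementation note that belongs next to the setting.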
To browse the full Pitstop arc42 sample, see my GitHub Gist.
Common mistakes I see (and made myself)
Only drawing “prod” and ignoring “dev”
If local dev and onboarding matter, treat them as a real deployment variant.
It does not have to be pretty, it has to be accurate.
Mixing behavior and placement
Chapter 7 is where things run and how they connect.
Behavior generally belongs in runtime scenarios (chapter 6).
Deployment-driven behavior (failover, DR, scaling) should be documented either here (when it depends on topology/environment) or in chapter 8 (when it is a reusable concept/pattern).
Configuration without structure
A thousand keys in a wall of text is not documentation, it is punishment.
Group by domain/feature, document defaults, and call out which values change behavior.
Forgetting operational boundaries
Who owns what? Which node is “managed by ops” vs “managed by the team”?
Which dependencies are inside your control, which are not?
No traceability to building blocks
If readers cannot map a box in the diagram back to a building block from chapter 5,
the deployment view becomes “a nice picture” instead of a useful model.
Done-when checklist
🔲 The main environments/variants are described (or explicitly out of scope).
🔲 Building blocks are mapped to nodes/locations.
🔲 Key runtime configuration is documented: defaults, where set, and what it changes.
🔲 Operational concerns are at least acknowledged (monitoring, backups, security boundaries).
🔲 A newcomer can answer: “where does this run?” and “what do I need to configure?”
Next improvements backlog
Add an explicit mapping table for each relevant environment or variant.
Link the actual deployment assets (Dockerfile, Helm, Terraform, compose) where appropriate.
Add a small “secrets and trust boundaries” note (what must be protected, where it lives).
Add operational SLO/SLA expectations if availability and latency are key goals.
Wrap-up
Chapter 7 is the reality check: the system, placed on real infrastructure with real constraints.
It is where “works in my head” becomes “works in an environment”.
With chapter 7 done, the full “How is it built and how does it run” group is complete.
Next up is the “Reusables, decisions, and qualities” group, starting with arc42 chapter 8, “Concepts”,
where we document the reusable cross-cutting ideas (auth, logging, error handling) without duplicating them in every scenario.
Chapter 6 describes runtime behavior: how building blocks collaborate in the scenarios that matter, including alternatives, exceptions, and the bits that tend to hurt in production. It is also the third chapter in the "How is it built and how does it run" group.
In this article I show what belongs in chapter 6, what to keep out, a flexible structure you can copy, plus a small example from Pitstop.
This post is about chapter 6: Runtime view,
the third chapter in the “How is it built and how does it run” group.
Chapter 5 gave us the map (building blocks and responsibilities).
Chapter 6 shows how that map is used in real life: who talks to whom, in what order, and why.
The arc42 template keeps this chapter intentionally “empty” by default.
It is basically a container for scenarios: one subchapter per runtime process you want to document.
Note
Chapter 6 can grow a lot. That is not a smell.
If users and external neighbors interact with your system, those flows are architecture.
The main job of chapter 6 of an arc42 document is to answer:
How does the system behave at runtime in the scenarios that matter?
What belongs here:
All relevant runtime scenarios where there is meaningful interaction:
user interactions that change state or trigger workflows
integrations with external neighbors (inbound/outbound)
operationally important processes (batch runs, scheduled jobs, import/export)
flows that embody key quality goals (latency, availability, auditability, resilience)
For each scenario: the collaboration between the building blocks (names consistent with chapter 5).
Alternatives and exceptions where they exist:
timeouts, retries, idempotency, partial failures, degraded/offline behavior, manual fallbacks.
Notes that help people reason about runtime behavior:
correlation IDs, observability points, ordering guarantees, consistency expectations.
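For the retry/idempotency notes above, a tiny sketch shows why the rule matters for integrations that can deliver duplicates. This is an in-memory illustration (names and the dedup store are assumptions); real systems would persist the seen IDs.

```python
# Sketch of idempotent message handling: processing is keyed by a message
# or idempotency ID, so a redelivered message does not apply the change twice.

def make_handler(apply_change):
    seen: set[str] = set()
    def handle(message_id: str, payload) -> bool:
        """Returns True if the change was applied, False if it was a duplicate."""
        if message_id in seen:
            return False          # duplicate delivery: acknowledge, do nothing
        apply_change(payload)
        seen.add(message_id)
        return True
    return handle
```

Documenting this once as a concept (chapter 8) lets each chapter 6 scenario simply say “the handler is idempotent” and link here.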
Tip
If a neighbor appears in the context view (chapter 3), try to let it show up in at least one runtime scenario over time.
If it never appears, that is useful feedback: maybe it is not a real neighbor, maybe it is background data, or maybe the relevant scenario is still missing.
Either way, treat it as a prompt to revisit the context view in your next iteration.
What does not belong here:
Long descriptions of static responsibilities and decomposition.
This chapter is about collaboration over time, not “what exists”.
A full contract catalog or protocol reference.
Link to specs where they live; keep this chapter focused on behavior and responsibilities.
Environment-specific deployment details.
The runtime behavior should still make sense even if you deploy differently.
Low-value diagram noise:
repeating “return payload” on every arrow when nothing is transformed,
or expanding every internal hop when it adds no architectural insight.
Cross-cutting flows that are the same everywhere, such as the OAuth/OIDC login flow.
That belongs in chapter 8 as a reusable concept (unless you are literally building an auth service 😅).
Note
Runtime view is where architecture stops being a set of boxes
and becomes a set of promises: “this happens”, “this must not happen”, “this is how we recover”.
Diagrams (and how to keep them readable)
Sequence diagrams for sequential flows
Sequence diagrams are excellent at showing who talks to whom, in what order, and why it matters.
Focus on what changes to keep diagrams readable:
Request/response pairs
Show them only when data transforms or meaning shifts.
Skip the “return OK” arrows that just echo back what was sent.
Internal hops
Compress them when a layer simply passes data through without adding architectural insight.
Three layers calling each other with identical payloads? Show it as one arrow across the boundary.
Scenario intent
Lead with it, not implementation noise.
Readers should grasp the essential flow in seconds, then dive into details if they need them.
Example trade-off
A “create order” scenario does not need you to diagram every internal service call.
Show the user action, the boundary entry point, the database write, and the response back.
Skip the middleware, logging, and validation layers unless they embody a quality goal or failure path.
BPMN for more complex flows
When scenarios have a lot of branching (alt/if/else), loops, and delays, sequence diagrams can become a spaghetti scroll. 🍝
That is where BPMN often shines: it stays readable as complexity grows.
Camunda Modeler is my go-to tool.
Trade-off: BPMN is typically stored as XML, which is not fun to review in a repo.
So exporting diagrams as images becomes an extra step.
Just keep the source file and the exported image together.
Tip
You do not need to pick one diagram type for everything.
Consistency helps, but clarity helps more.
The minimum viable version
If you are short on time, aim for this:
Start with 1–3 scenarios that cross boundaries: user → system and system → neighbor.
The first user flow, the first integration, and the first “what if it fails?” path.
Conflicting edits → “last-write-wins” only for safe fields; status changes may require rules (e.g., foreman override).
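The “last-write-wins only for safe fields” rule above can be sketched as a small merge function. The field classification, names, and the `(value, timestamp)` shape are assumptions for illustration; the point is that free-text fields merge by timestamp while status changes go through transition rules instead.

```python
# Sketch of conflict resolution: last-write-wins for safe fields only;
# status is not safe and requires rules (here: a foreman override).
# Field names and the (value, timestamp) shape are illustrative.

SAFE_FIELDS = {"notes", "mechanicComment"}   # conflict-free by policy

def merge_edit(current: dict, edit: dict, is_foreman: bool = False) -> dict:
    merged = dict(current)
    for name, (value, timestamp) in edit.items():
        if name in SAFE_FIELDS:
            # last-write-wins: the newer timestamp replaces the field
            if timestamp >= current.get(name, (None, -1))[1]:
                merged[name] = (value, timestamp)
        elif name == "status":
            # status is not a safe field: only a foreman override forces it here
            if is_foreman:
                merged[name] = (value, timestamp)
    return merged
```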
To browse the full Pitstop arc42 sample, see my GitHub Gist.
Common mistakes I see (and made myself)
Only documenting the happy path
Architecture shows up in failure handling, retries, timeouts, and recovery.
Diagrams that do not match your building blocks
If names and boundaries differ from chapter 5, readers lose their mental model.
Diagram noise instead of insight
Do not waste pixels on repetitive returns and unchanged payloads.
Compress internal hops when they add no architectural value.
Instead of: User → API → Service → Repository → DB → Repository → Service → API → User
Prefer: User → API → Domain write (DB) → API → User
Avoiding “big” because it might become big
Documenting 27 scenarios is not wrong.
It becomes wrong when nobody can find anything.
Group them, index them, or split into linked documents when the system is large.
No external stakeholder recognition
If a neighbor/system owner cannot recognize their part in the flow,
you probably did not document it clearly enough.
Done-when checklist
🔲 Relevant user and neighbor interactions are covered (or explicitly postponed).
🔲 Scenario diagrams use building block names consistent with chapter 5.
🔲 Key alternatives/exceptions are documented where they matter.
🔲 The chapter is navigable: scenarios are grouped and titled clearly.
🔲 Readers can explain “what happens when…” without guessing.
Next improvements backlog
Add scenarios for remaining neighbors until the context view is “explained by runtime”.
Add idempotency/retry rules for integrations that can duplicate messages.
Add observability notes per scenario (correlation IDs, key logs/metrics).
Split scenarios into separate linked arc42 documents if the system warrants it.
Wrap-up
Chapter 6 is where your architecture becomes visible in motion.
Chapter 5 turns strategy into structure using white-box decomposition. It describes the static building blocks of your system, their responsibilities, and the most important dependencies, without diving into runtime flows.
Learn what belongs in chapter 5, what to keep out, and get a copy/paste template plus a real example from Pitstop.
This post is about chapter 5: Building block view,
the second chapter in the “How is it built and how does it run” group.
Chapter 4 set direction; chapter 5 makes it tangible.
Here you describe the static structure of the system: the building blocks, what each one is responsible for,
and which dependencies matter.
The goal is not to document everything.
The goal is to give readers a mental map of the solution, so changes and discussions stop happening “in someone’s head”.
Note
This post is longer because chapter 5 introduces hierarchical decomposition (a.k.a. zooming in step by step):
start small, and only add detail when it prevents real misunderstandings.
What belongs in chapter 5 (and what does not)
The main job of chapter 5 of an arc42 document is to answer:
What are the main parts of the system, and what is each part responsible for?
What belongs here:
A building block hierarchy (level 1–3), from coarse to detailed.
Per building block:
responsibility (one sentence)
key dependencies
main interfaces (what it offers/needs)
The boundaries that matter: ownership, responsibilities, and “who is allowed to change what”.
When multiple teams work on the same system, building block boundaries often align with team ownership.
If changing a block requires coordination with another team, that boundary is worth documenting.
The structural consequences of your strategy from chapter 4
(e.g., modular monolith vs distributed, having a BFF, etc.).
Links to source code or generated docs when that helps (if building blocks map to modules/packages/repos).
What does not belong here:
Copy/pasting large parts of earlier chapters.
Refer back to the goals, constraints, and context when you need them,
but keep this chapter focused on responsibilities and boundaries.
Step-by-step flows, sequencing, or “and then it calls X” stories.
This chapter is about static structure, not behavior.
Environment-specific deployment and infrastructure details.
Keep those concerns separate so the building block view stays stable even when environments change.
Full interface specifications and contract catalogs.
You can link to OpenAPI/AsyncAPI or other specs, but avoid duplicating payloads and edge cases here.
Low-level implementation decisions that change frequently.
If it is likely to flip during sprints (a library choice, an internal pattern tweak),
it does not belong in the core structure.
The “white-box” metaphor
The core concept of this chapter is the black-box vs. white-box approach.
Chapter 3 was the black-box view: The system is a sealed opaque block.
We only described what crosses the boundary (interfaces) and who sits outside (neighbors),
but internals were invisible (hence “black” or opaque).
Chapter 5 is the white-box view:
We “open the lid” of the system. We look inside to see how it is constructed.
Level 1 opens the main black box. If a component inside Level 1 is complex,
we treat that component as a black box first, then open it up in Level 2 (its white-box view).
This hierarchical decomposition is standard in arc42 and aligns with the C4 Model “Zoom” concept.
Levels mean different things in different documents
First, a blunt disclaimer: The building block levels are a zoom tool, not a fixed taxonomy.
You stop decomposing when you can no longer explain why the detail matters to your architectural goals.
What “level 1–3” means depends on what you are documenting:
For a large system, level 1 might be products, level 2 domains/services, and level 3 microservices.
For a single (micro)service, level 1 might be the service boundary, level 2 internal modules, and level 3 namespaces.
For a platform/library team, level 2 might describe public APIs or even classes,
because that is what stakeholders integrate with,
and level 3 might be implementation details that only the owning team needs to understand.
Tip
Pick the level of detail that matches your stakeholders.
A diagram is successful when it answers their questions, not when it contains more boxes.
Level 1 should match chapter 3
Level 1 is where you show the system boundary and the neighbors.
It should include the same external neighbors you introduced in chapter 3.
Warning
Do not confuse context with building blocks.
Chapter 3: Who/what is outside, and what crosses the boundary?
Chapter 5 Level 1: What are the main internal building blocks,
how do they depend on each other, and how do they connect to the external neighbors?
That creates a nice “thread” through the document:
chapter 3: who we interact with and what crosses the boundary
chapter 5: how we are structured to deal with that
chapter 6: how the collaboration plays out at runtime (spoiler alert! 🫣)
Do not repeat interface details on every level
Interfaces show up on multiple levels, but you do not have to repeat everything.
Repeating payloads and contracts at every zoom level creates noise and maintenance debt.
A practical rule:
Level 1: Name the interactions (e.g., “Appointments”, “Status Updates”) so the relationship is clear.
Level 2/3: Document the interface where the contract lives (e.g., in the integration module or port)
and link to the source/spec.
When you are describing interfaces on a level, it can be helpful to separate them into
offered interfaces (what the block provides) and required interfaces (what it needs).
If building blocks map cleanly to code, link them.
Some teams generate docs straight from source (Doxygen-style or similar),
which can make this chapter accurate and cheap to maintain.
Example (Pitstop)
Pitstop is my small demo system for this series.
It is intentionally simple, so the documentation stays shareable.
This is what chapter 5 looks like when filled in.
5. Building block view
5.1 White-box overall system
Building blocks (level 1)

| Block | Responsibility | Key Interfaces |
| --- | --- | --- |
| Admin Overview UI | Dashboard, coordination, customer comms support | HTTPS/JSON to Backend |
| Workshop View UI | Bay/task board, fast updates, degraded mode | WebSocket/JSON to Backend |
| Backend | Core domain + APIs + orchestration | HTTPS/JSON + WS + internal module interfaces |
| Sync & Integration | Mapping + sync strategy per planning vendor | REST/JSON, webhooks, retry |
| Audit/Event Log | Immutable history for accountability + analytics | Append/read APIs |
| DB | Operational persistence | SQL (implementation-specific) |
5.2 Level 2 - Pitstop Backend
Notes
Modules contain domain rules.
The Integration Ports (a ports-and-adapters pattern, as chosen in chapter 4)
isolate vendor protocols and mapping, so domain modules do not depend on external systems directly.
Reporting read models can be optimized independently (avoid OLTP pain).
Building blocks (level 2)

| Element | Responsibility | Depends on |
| --- | --- | --- |
| Work Order Module | Core logic for orders | Customer, Audit |
| Workshop Module | Mechanic task management | WorkOrders, Audit |
| Admin Module | Configuration & overrides | Audit |
| Customer/Vehicle Module | Shared entity data | Audit |
| Reporting | Read-optimized views | (Domain Events) |
| Planning Port | Adapter for Planning Service | External |
| Notification Port | Adapter for Notification Service | External |
| Audit Writer | Centralized compliance logging | DB |
| API Layer | Protocol handling (HTTP/WS) | Auth, Modules |
To browse the full Pitstop arc42 sample, see my GitHub Gist.
Note
A level 3 zoom into the Work Order Module could show its internal structure
(e.g., command handlers, domain entities, validation rules) if stakeholders need that detail.
For brevity, we leave it out here.
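To make the Integration Ports idea from the notes above concrete, here is a minimal sketch of the ports-and-adapters shape behind something like the Planning Port. All names here (`PlanningPort`, `AcmePlanningAdapter`, `get_slots`, the field names) are hypothetical illustrations, not the actual Pitstop code; the point is only that domain modules depend on the port interface, never on the vendor client.

```python
from abc import ABC, abstractmethod

class PlanningPort(ABC):
    """Port: what the domain needs from any planning vendor."""

    @abstractmethod
    def fetch_appointments(self, day: str) -> list[dict]: ...

class AcmePlanningAdapter(PlanningPort):
    """Adapter: owns the vendor protocol and the field mapping."""

    def __init__(self, client):
        self._client = client  # vendor SDK or HTTP client (stand-in here)

    def fetch_appointments(self, day: str) -> list[dict]:
        raw = self._client.get_slots(day)  # vendor-specific call
        # Map vendor fields into the domain vocabulary.
        return [{"vehicle": r["car_id"], "start": r["slot_start"]} for r in raw]

class WorkOrderModule:
    """Domain module: only sees the port, so vendors stay swappable."""

    def __init__(self, planning: PlanningPort):
        self._planning = planning

    def orders_for(self, day: str) -> list[dict]:
        return self._planning.fetch_appointments(day)
```

Swapping planning vendors then means writing a new adapter; `WorkOrderModule` and the rest of the domain are untouched, which is exactly the modifiability property chapter 4 asks for.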
Common mistakes I see (and made myself)
Too much detail too early
If chapter 5 looks like a class diagram, it will not be maintained.
Start coarse, and zoom in only where complexity justifies it.
Building blocks without responsibilities
Boxes called Service and Manager are not responsibilities.
Each block should say what it owns: persistence, state transitions, messaging, integrations, etc.
Mismatch with chapter 3
If chapter 3 lists neighbors, level 1 should show them. As you document the white-box,
you might find a specific module that talks to an external system you forgot to list in chapter 3.
Consistency goes both ways!
Repeating interface specs everywhere
Do not duplicate protocol and payload details on every level.
Put the detail where it makes sense (often chapter 3) and link to it.
Forgetting “source of truth”
For important data: who owns it, and who is allowed to change it?
If you do not answer this, production will answer it for you.
Using technology names as architecture
Kafka and PostgreSQL are implementation choices.
Building blocks should describe responsibilities (message bus, persistence, state, integrations),
so your diagrams remain useful when technology or deployment changes.
Done-when checklist
🔲 Level 1 includes the system boundary and the neighbors from chapter 3.
🔲 Each building block has a clear responsibility in one sentence.
🔲 External interfaces are referenced (and not duplicated) where documented.
🔲 Level 2/3 are used only when complexity or stakeholders require it.
🔲 A new team member can explain “what lives where” after reading this chapter.
Next improvements backlog
Add ADR links when boundaries or decomposition are disputed (chapter 9).
Add level 3 only for a few areas where deeper detail prevents misunderstandings.
Add links to code/docs where building blocks map cleanly to modules or repos.
Wrap-up
Chapter 5 is the map. 🗺️
It helps people find responsibilities, boundaries, and where to implement changes.
Next up: arc42 chapter 6, “Runtime view”, where we put this structure in motion and describe the most important end-to-end flows.
Chapter 4 opens the "How is it built and how does it run" group. It is where goals, constraints, and context from the first three chapters start to shape the design through a small set of guiding decisions.
In this article I show what belongs in chapter 4, what to keep out, how to handle open strategy questions, and a flexible structure you can copy, plus a small example from Pitstop.
This post opens the “How is it built and how does it run” group.
The first three chapters can feel like silos: each one introduces its own set of information.
Here, those inputs start to shape the design. This is where you set direction for the solution.
Your solution strategy should fit the goals from chapter 1,
operate within the non-negotiables from chapter 2,
and respect the boundaries and partners from chapter 3.
Early in a project this chapter can be short. That is normal.
The strategy grows as you learn, as constraints become concrete, and as decisions are made.
What belongs in chapter 4 (and what does not)
Chapter 4 of an arc42 document answers one question:
What is our approach, and which major decisions guide the design?
What belongs here:
A short list of guiding choices that shape the whole design.
For each choice a short rationale: why this direction fits the goals, constraints, and context.
The “heavy” decisions that should not change every sprint:
Major platform choices, integration strategy, monolith vs distributed, data approach, deployment style.
Trade-offs and rationale, linked back to earlier chapters where possible.
Consequences (direction and impact), so people understand what follows from the strategy.
Links to ADRs when they exist (chapter 9).
If your list grows over time, group the strategy items into a few buckets that fit your scope
(pick what matches your system), for example: architecture style, integration, data, and deployment.
What does not belong here:
Detailed breakdowns of internal parts and their dependencies.
Step-by-step interaction flows or scenario descriptions.
Environment-specific operational details.
Small, sprint-level technical choices that are likely to change often.
Copy/pasting earlier chapters: link to the drivers instead and focus on what you decided and what it implies.
Note
Strategy is not the same as “technology list”.
A good strategy explains why a direction makes sense and what it implies.
This chapter often starts almost empty
Early in the design process, chapter 4 can be short.
That is normal.
As design and build progress, this chapter becomes the place where everything starts to connect:
quality goals, constraints, concepts, deployment choices, and runtime behavior.
If a strategy item is negotiable, keep it lightweight.
If it is truly a “heavy” direction setter, make sure it is backed by a constraint, a discussion, or an ADR.
Tip
Chapter 4 is also a good place to list open strategy questions that still need a decision.
A visible list of unknowns is more useful than pretending everything is decided.
The minimum viable version
If you are short on time, aim for a small set of strategy statements as concise bullets with just enough context to steer design.
A good “minimum viable” strategy statement usually contains:
Approach / decision (one line)
Rationale (one or two short lines: why this direction)
Consequence / impact (one short line: what this enables or constrains)
You do not need to hit an exact number of lines; you can combine them in a readable way.
The key is that the rationale and impact are clear and concise,
and that it is easy to see how the choice connects back to the drivers.
Copy/paste structure (Markdown skeleton)
Use this as a starting point and keep it small.
04-solution-strategy.md
## 4. Solution strategy

<1–3 short paragraphs: what is the overall approach and why?>

- **<Approach / decision>**: <rationale — why this direction> → <consequence / impact>
Strategy statements should be short.
If you need a full page to explain one item, you probably want to split details into another chapter and link to it.
Tip
Where you put open questions depends on how you work.
If your process is strategy-driven (pick direction first, then refine), keeping them in chapter 4 works well.
If your process is more risk-driven (track uncertainties and mitigation first),
you might prefer chapter 11 and link to them from here.
Example (Pitstop)
Pitstop is my small demo system for this series.
It is intentionally simple, so the documentation stays shareable.
This is what chapter 4 looks like when filled in.
4. Solution strategy
Pitstop is designed as an operational source of truth for work orders and status,
with near real-time synchronization between planning and workshop execution.
Modular monolith backend (initially)
Keep deployment simple and change-friendly while the domain stabilizes.
Modules are strict (no “grab-bag services”) and communicate via explicit interfaces.
Adapter-based integrations (Planning, Notifications, Parts status)
Each external system sits behind a port/adapter boundary to protect domain logic
and keep new integrations fast.
Traces to: Modifiability goal (≤ 2 days), Planning integration constraint.
Near real-time updates via push
Workshop and admin need shared truth quickly (≤ 2 seconds).
Use WebSocket/SSE where possible; fall back to efficient polling.
Traces to: Consistency goal, near real-time constraint.
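The “push where possible, fall back to polling” idea can be sketched as a tiny selection function. This is a hypothetical illustration, not Pitstop code: `connect_push` and `poll_once` are stand-ins for real WebSocket/SSE and HTTP transport code.

```python
# Hypothetical sketch: prefer a push channel, degrade to polling.

def start_updates(connect_push, poll_once, on_update):
    """Try the push channel first; fall back to polling if it cannot connect."""
    try:
        connect_push(on_update)   # e.g. open a WebSocket/SSE subscription
        return "push"
    except ConnectionError:
        # Fallback: fetch the current state once; the caller schedules
        # repeated polls within the ≤ 2 seconds freshness budget.
        on_update(poll_once())
        return "polling"
```

The value of writing this down in chapter 4 is the degradation order, not the transport details: readers learn that polling is an accepted fallback, and what freshness budget it must still meet.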
Degraded-mode workshop operation
Workshop UI supports local queueing and later sync when connectivity returns.
Traces to: Resilience goal, degraded-mode constraint.
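Degraded-mode operation with local queueing can also be sketched briefly. Again hypothetical: `OfflineQueue`, `send`, and the change shape are illustrative names under the assumption that changes are replayed in order once connectivity returns.

```python
from collections import deque

# Hypothetical sketch: queue status changes locally while offline,
# replay them in order when the connection comes back.

class OfflineQueue:
    def __init__(self, send):
        self._send = send        # callable that pushes one change to the backend
        self._pending = deque()  # FIFO preserves the order of local edits
        self.online = True

    def record(self, change: dict) -> None:
        if self.online:
            try:
                self._send(change)
                return
            except ConnectionError:
                self.online = False  # drop to degraded mode
        self._pending.append(change)

    def resync(self) -> None:
        """Replay queued changes in order once connectivity returns."""
        self.online = True
        while self._pending:
            self._send(self._pending.popleft())
```

Note that this sketch deliberately dodges the hard part, conflict resolution after offline edits, which is exactly why that question appears in the open strategy questions below.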
Audit-first changes for work order state
Every status change and important edits record who/when/why (immutable history),
enabling dispute resolution and throughput analysis.
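“Audit-first” can be made concrete with a small sketch: the who/when/why record is appended before the new state is applied, and the record itself is immutable. All names here (`AuditEntry`, `WorkOrder`, the fields) are hypothetical, not the actual Pitstop model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# frozen=True makes entries immutable after creation: an append-only history.
@dataclass(frozen=True)
class AuditEntry:
    work_order: str
    old_status: str
    new_status: str
    who: str
    why: str
    when: str

class WorkOrder:
    def __init__(self, order_id: str, status: str, audit_log: list):
        self.order_id = order_id
        self.status = status
        self._audit = audit_log  # in Pitstop terms: the centralized Audit Writer

    def change_status(self, new_status: str, who: str, why: str) -> None:
        # Audit first: record the transition before mutating state.
        self._audit.append(AuditEntry(
            work_order=self.order_id,
            old_status=self.status,
            new_status=new_status,
            who=who, why=why,
            when=datetime.now(timezone.utc).isoformat(),
        ))
        self.status = new_status
```

The ordering is the concept: if writing the audit record fails, the status does not change, so the history can never be missing a transition.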
Open strategy questions
Question: WebSocket vs SSE as the default push channel?
Affects real-time UX and infra constraints. Validate with UI needs + ops constraints.
Question: What conflict resolution approach do we use after offline edits?
Affects user trust and operational continuity. Define business rules with workshop stakeholders.
To browse the full Pitstop arc42 sample, see my GitHub Gist.
Common mistakes I see (and made myself)
No strategy statements
If chapter 4 is empty or just a placeholder, the architecture lacks direction.
Without strategy, designs drift and teams lose alignment.
Repeating the earlier chapters instead of linking
Chapter 4 should build on chapters 1, 2, and 3, not copy them.
Use links and focus on the consequences.
Only listing technologies
“We use Kubernetes” is not a strategy. “We deploy as containers because ops standardizes on it” is.
No rationale
Without rationale, strategy statements look like preferences.
Tie each item back to a goal, constraint, or context boundary.
Treating consequences as a negative
Consequences are direction.
If a choice does not enable anything valuable for stakeholders, it is a smell.
Making it too detailed
Chapter 4 should be readable in a few minutes.
Details belong in other chapters and ADRs.
Hiding unknowns
If open questions only live in someone’s head, the team cannot contribute.
Making assumptions explicit invites feedback and prevents silent divergence.
Done-when checklist
🔲 Contains a small set of strategy statements (not a tech wishlist).
🔲 Each statement has a short rationale and a clear impact.
🔲 Statements link back to goals/constraints/context (chapters 1, 2, 3).
🔲 The choices feel stable enough to not change every sprint.
🔲 Open strategy questions are visible (here or in chapter 11), not hidden in someone’s head.
Next improvements backlog
Review strategy statements with ops and key external stakeholders for realism.
Add links to ADRs as decisions become concrete (chapter 9).
Add a short mapping from strategy to top quality goals.
Move unstable or controversial topics into “Open strategy questions” until decided.
Remove strategies that no longer serve stakeholder value (and document the change as an ADR).
Wrap-up
Chapter 4 is where the design starts to take shape.
It should be short, directional, and connected to the drivers you already captured in the first 3 chapters.