/ Michaël Hompus

Chapter 10 turns quality goals into testable quality scenarios. It helps you move beyond vague words like "fast" or "secure" by describing concrete situations, expected responses, and measurable targets. ISO/IEC 25010 and Q42 can help as a structure and inspiration, but the real value is iteration: refine goals, learn from reality, and tighten scenarios over time. In this article I explain what belongs in chapter 10, what to keep out, a minimal structure you can copy, plus a small example from Pitstop.

This post is about chapter 10: Quality requirements, the third chapter in the “Reusables, decisions, and qualities” group.

Chapter 1 introduced quality goals at a high level. Chapters 8 and 9 captured patterns and decisions that often exist because of those goals. Chapter 10 is where I make qualities concrete: not as slogans, but as scenarios you can test, monitor, and verify.

One recurring problem: stakeholders and teams find it hard to write SMART quality requirements. They will say “fast”, “robust”, and “secure”, and everyone nods. Then production teaches you that nodding is not a measurement.

What belongs in chapter 10 (and what does not)

Chapter 10 of an arc42 document answers:

Which quality requirements matter, and how do we know we meet them?

What belongs here:

  • A quality requirements overview:
    the relevant quality characteristics for your system, grouped in a structure that is easy to scan. ISO/IEC 25010 is a common choice for this grouping, and Q42 is a useful catalogue for examples.
  • A set of quality scenarios:
    situation-based, testable requirements with a stimulus, an expected response, and a metric or target. “Testable” means different things per type: validate a latency scenario with a load test or SLO alert; an auditability scenario with a timed export; a modifiability scenario by verifying the adapter boundary in a code review.
  • A clear link back to quality goals from chapter 1.
    If chapter 1 says “auditability” is a top goal, chapter 10 should make that measurable.
  • Cross-links to where quality is implemented:
    concepts (chapter 8), decisions (chapter 9), and sometimes deployment constraints (chapter 7).

What does not belong here:

  • A technology shopping list.
    “Kafka” is not a quality requirement, it is a potential solution.
  • Purely functional requirements and business workflows.
    Those belong in use cases, building blocks (chapter 5), and runtime scenarios (chapter 6).
  • Only vague adjectives.
    “fast” and “secure” are direction, not requirements. Chapter 10 is where you turn them into something you can validate.

Tip

If you cannot imagine a test, a metric, or an operational check for a statement, it probably belongs in chapter 1 as a goal, not in chapter 10 as a requirement.
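
As an illustration of such an operational check: if you happen to run Prometheus-style monitoring (an assumption, nothing arc42 prescribes), the “p95 status update latency ≤ 2s” scenario can be wired to an alert rule. A minimal sketch, using a hypothetical status_update_latency_seconds histogram metric:

```yaml
groups:
  - name: quality-scenarios
    rules:
      # Fires when the quality scenario "p95 status update latency <= 2s" is violated.
      - alert: StatusUpdateLatencyP95High
        expr: |
          histogram_quantile(0.95,
            sum(rate(status_update_latency_seconds_bucket[5m])) by (le)) > 2
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "p95 status update latency has been above 2s for 10 minutes"
```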

Why is quality so late in arc42?

It can feel strange that quality scenarios show up this late in the arc42 structure. It can look like quality is an afterthought. It is not.

This is how I explain it:

  • Quality goals are up front because they drive direction.
  • Quality scenarios are later because they need context to be meaningful.
  • The document is iterative: you refine goals, you make choices, you learn, you tighten scenarios.

In other words, chapter 10 benefits from having chapters 5–7 in place. A scenario like “p95 status update latency is ≤ 2s” only makes sense when you know what “status update” is, which building blocks collaborate, and where the system actually runs.

Note

Verification often happens late because reality arrives late. The trick is to still let quality drive your work early, then use chapter 10 to sharpen the targets as you learn.

A structure that helps when people struggle with SMART qualities

If your stakeholders struggle with SMART wording, do not fight them with a blank page. Give them a ladder:

  • Start with a quality tree to agree on vocabulary.
  • Add a short overview per quality area: what matters and what does not.
  • Convert the important items into scenarios with measurable targets.

Two helpful sources for vocabulary and inspiration:

  • ISO/IEC 25010:2023 gives you a familiar top-level structure.
  • Q42 is a companion project by the arc42 team. It gives you a large catalogue of quality characteristics with descriptions and example requirements you can adapt.

Use them as scaffolding, not as a checklist.

Quality tree diagram

A quality tree is a visual overview of which quality characteristics apply to your system. It works like a map: it shows the landscape at a glance, so you can decide where to focus.

It is useful because it makes trade-offs visible. When you can see all quality areas together, it becomes easier to say “this matters more than that”, and to explain that choice to others. It also prevents the “everything is important” trap: when everything is marked as a top priority, that is the same as having no priorities at all.

Quality tree diagram

Note

Most systems use a subset of the tree, not all branches. The goal is clarity, not purity.
It is fine to add system-specific categories such as auditability or data minimization.
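
If you prefer a text-based source for such a diagram, a PlantUML mindmap is one lightweight option. A minimal sketch; the branches are illustrative, borrowed from the Pitstop example further down:

```plantuml
@startmindmap
* Quality
** Reliability
*** Degraded-mode operation
*** Sync backlog handling
** Consistency
*** Status propagation across UIs
** Maintainability
*** Vendor adapter isolation
** Security
** Auditability
@endmindmap
```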

The minimum viable version

If you are short on time, aim for this:

  1. A small quality overview, grouped by ISO/IEC 25010:2023 headings (or your own headings if that reads better).
  2. Pick 3–6 top items and write quality scenarios for them.
  3. For each scenario, add a metric target you can validate later.

That is enough to stop quality from being a vibe.

Copy/paste structure (Markdown skeleton)

Use this as a starting point.

10-quality-requirements.md
## 10. Quality requirements
<Short intro: why quality matters for this system and how we verify it.>
### 10.1 Quality requirements overview
<Group requirements using ISO/IEC 25010:2023 headings or another clear structure.>
<Mark "nice-to-have" items explicitly.>
### 10.2 Quality scenarios
<Scenario-based, testable requirements. Keep them short and measurable.>
| Scenario | Stimulus | Response | Metric/Target |
| :------- | :------- | :------- | :------------ |
| ... | ... | ... | ... |
<Add more tables per quality theme if that improves readability.>

Note

If you already use BDD or Gherkin, the mapping is straightforward:
Given (context and preconditions),
When (stimulus),
Then (expected response and metric/target).
You can write scenarios in Gherkin and reference them here, or keep them in the table format above. Either way, the key property is the same: concrete, testable, and measurable.
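
For example, the Pitstop consistency scenario shown later in this post could read like this in Gherkin (a hypothetical rendering, not part of the official sample):

```gherkin
Feature: Work order status consistency

  Scenario: Status update is visible everywhere
    Given a work order that is "InProgress"
    When a mechanic sets the status to "WaitingForParts"
    Then the Admin and Workshop UIs show "WaitingForParts" within 2 seconds (p95)
```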

Example (Pitstop)

Pitstop is my small demo system for this series. It is intentionally simple, so the documentation stays shareable.

Below is a shortened version of the Pitstop chapter 10. It shows the structure without drowning you in every possible scenario. Notice how overview headings and scenario groups mark which chapter 1 top goals they address. Consistency is a Pitstop-specific quality area that does not map to a single ISO/IEC 25010:2023 category.

10. Quality requirements

10.1 Quality requirements overview

Quality tree diagram

Reliability (top goal: Resilience)

  • Degraded-mode operation for workshop during flaky internet.
  • Sync backlog does not block workshop core operations.

Consistency (top goal: Consistency)

  • Status updates visible across all UIs within seconds.
  • Idempotent handling of duplicate planning updates.

Maintainability (top goal: Modifiability)

  • Add a new planning vendor adapter without changing core work order rules.
  • Nice-to-have: automated contract tests with recorded fixtures.

Security

  • Role-based access control with site scoping via garageId.
  • Secure audit trail, prevent tampering with history.

Auditability / traceability

  • Every significant change records who, when, and why.
  • Timeline export supports disputes and compliance.

10.2 Quality scenarios

Reliability (top goal: Resilience)

| Scenario | Stimulus | Response | Metric/Target |
| :------- | :------- | :------- | :------------ |
| Wi-Fi outage | 15 min disconnect | Workshop continues, actions queued locally | ≥ 99% of actions queued without loss |
| Reconnect | Network returns | Queue replays and sync completes | drained within ≤ 60s |

See also: degraded mode concept and ADR-001.

Consistency (top goal: Consistency)

| Scenario | Stimulus | Response | Metric/Target |
| :------- | :------- | :------- | :------------ |
| Status visible everywhere | Mechanic sets WaitingForParts | Admin and Workshop converge | ≤ 2s end-to-end (p95) |
| Duplicate vendor update | Planning sends same appointment twice | Processed once, idempotent | 0 duplicate work orders |

Maintainability (top goal: Modifiability)

| Scenario | Stimulus | Response | Metric/Target |
| :------- | :------- | :------- | :------------ |
| Add planning vendor | New API and mapping | Add adapter, domain unchanged | ≤ 2 days, core untouched |

Security

| Scenario | Stimulus | Response | Metric/Target |
| :------- | :------- | :------- | :------------ |
| Cross-garage access | User tries other garageId | Denied | 100% blocked |
| Audit tamper attempt | Try to edit history | Prevented + logged | 100% blocked + logged |

Auditability

| Scenario | Stimulus | Response | Metric/Target |
| :------- | :------- | :------- | :------------ |
| Customer dispute | “You promised 16:00” | Export full timeline | ≤ 60s export |

To browse the full Pitstop arc42 sample, see my GitHub Gist.

Common mistakes I see (and made myself)

  1. Writing only adjectives
    “fast” is not a requirement. A scenario with a measurable target is. Make sure to talk with stakeholders about what the target should be and how to verify it.

  2. Mixing requirements and solutions
    “use Redis” is a decision, not a requirement. The requirement is something like “fast access to work order state”. If you have a decision that implements a quality requirement, write the requirement here, and link to the decision in chapter 9.

  3. No link back to goals
    If chapter 1 lists top goals, chapter 10 should make them concrete. It would be strange if chapter 1 says “consistency” is a top goal, but chapter 10 does not have any scenarios to measure it.

  4. Treating this as one-and-done
    Quality scenarios improve with iteration. Early drafts are allowed to be rough, as long as you refine them. Every time you add a scenario, building block, deployment, or decision, ask yourself if it has quality implications that should be captured here.

  5. Too many scenarios without navigation
    A large system can have many scenarios. Group them, keep titles clear, and keep tables consistent. Link to documents if you have detailed test plans or runbooks.

Done-when checklist

🔲 Quality requirements are grouped in a structure people recognize (ISO/IEC 25010 or equivalent).
🔲 Top quality goals from chapter 1 are turned into measurable scenarios.
🔲 Scenarios include a stimulus, response, and a metric or target.
🔲 At least one quality area traces back to the concept or decision that implements it.
🔲 The chapter is treated as iterative; it will be refined as the system and insights evolve.

Next improvements backlog

  • Add monitoring or test hooks for the most important scenario metrics.
  • Add scenario coverage for important external neighbors and operational jobs.
  • Tighten targets over time based on observed production baselines.
  • Add a short note per top goal on how it is validated (test, metric, runbook).

Wrap-up

Chapter 10 is where quality stops being a wish and becomes a check. When a quality trade-off is accepted, document it here: note which quality was deprioritized, which won, and link to the decision in chapter 9 that captures the reasoning. You can start with rough scenarios, then refine them as you learn.

Next up: arc42 chapter 11, “Risks and technical debt”, where we capture the things that can still bite us later, and how we keep them visible.

/ Michaël Hompus

Chapter 9 is your decision timeline. It records the important architectural choices you made along the way, so you can see what was decided, why, and which options were not picked. This chapter often starts small, but it grows as the system grows. In this article I explain what belongs in chapter 9, what to keep out, a minimal structure you can copy, plus a small example from Pitstop.

This post is about chapter 9: Architectural decisions, the second chapter in the “Reusables, decisions, and qualities” group.

Chapter 8 captured reusable patterns and practices. Chapter 9 captures the choices that shape the system, including the strategy choices from chapter 4.

I treat this chapter as a timeline. It often starts small, because you have not made many decisions yet. But it is still the start of the decision trail, and every meaningful choice you make later can land here.

Note

Chapter 9 is the beginning of the timeline. If your system lives for years, this chapter will grow with it. That is a feature, not a smell.

What belongs in chapter 9 (and what does not)

Chapter 9 of an arc42 document answers:

Which important architectural decisions were made, and what was the rationale?

What belongs here:

  • A decision timeline that is easy to scan. A table works well, because it stays compact even when the list grows.
  • Decisions that affect future work: structure, integration approach, deployment strategy, quality trade-offs, team boundaries, and platform choices. Strategy decisions from chapter 4 are a perfect match here.
  • For each decision, at least:
    • the decision itself
    • a short motivation
    • a link to details (optional, but recommended)
  • Considered options for decisions that can trigger future discussions. This is the part that prevents “why did you not choose X?” debates years later.
  • Links to where the decision shows up: chapters 4–8 and the relevant code or infrastructure artifacts.

What does not belong here:

  • Changes that are easy to undo. A practical test: if reverting the change does not hurt, it is probably too small to record as a decision.
  • Meeting notes or chat transcripts. Keep the record curated.
  • A concept guide. If the content is “how we always do retries”, put it in chapter 8 and link to it from here.

Tip

Decisions are made more often than teams realize. Writing them down is how you turn “tribal knowledge” into shared knowledge.

Decision timeline formats: list or ADRs

There is no required format here. The simplest valid version is a decision timeline with a short motivation per entry.

ADRs are an optional extension of that idea. They are not required by arc42, but they are a great format when a decision has real trade-offs and future consequences. This post is not an exhaustive guide on ADRs (there are entire books on that), but I will show you how to fit them into the concise style of arc42.

A simple rule of thumb for when to write an ADR: if the decision constrains future work and is hard to reverse, it deserves one. If reverting or changing it later is cheap, a one-liner in the timeline is enough.

A good workflow is:

  • Keep the timeline in chapter 9.
  • When an entry needs more detail, link to an ADR file.
  • The timeline stays readable, and the details stay available.

Tip

Use AI to draft ADRs and keep the timeline in sync

Writing ADRs can feel like a chore, but AI tools are good at turning messy context into a consistent record. Use them to draft the ADR in the agreed template, and to update the chapter 9 timeline entry at the same time (so the table stays the single index).

See How I use Claude Code to keep my architecture decisions on track by Willem Meints for a practical workflow.

ADR format

When I use ADRs, I put the decision before considered options. That gives a management overview without forcing readers through all the details.

A good ADR structure:

  • Metadata (date, status, decision makers)
  • Context (what problem or constraint triggered this decision)
  • Decision (one short paragraph: what did we decide)
  • Consequences (what changes and risks did we accept)
  • Considered options (with a short “why not” for each)

Tip

Put the decision in the ADR title. Future you will scan titles, not full pages of content.

Statuses are kept simple:

  • Pending (draft, not yet accepted)
  • Accepted
  • Superseded by ADR-XXX (the replacement decision)

I do not use a “deprecated” status. Deprecating something is itself a decision, and that decision supersedes the old one. Here too, you have to write down the consequences of deprecation: will you clean up, do you accept dead code, and so on.

Warning

Treat accepted ADRs as immutable.

Do not rewrite an old ADR when the decision changes. Instead, mark it as “Superseded” and write a new ADR. This preserves the history of why you thought the original decision was a good idea at the time.
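
For example, a superseded decision can stay visible in the timeline like this (both entries are hypothetical, purely to show the trail):

| Date | Decision | Status |
| :--- | :------- | :----- |
| 2023-03-10 | ADR-004 Use library X for report generation | Superseded by ADR-011 |
| 2025-11-02 | ADR-011 Replace library X with a managed reporting service | Accepted |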

The minimum viable version

If you are short on time:

  • Start with a timeline table.
  • For each entry, write 1–3 lines of motivation.

That is already enough to preserve the reasoning.

Copy/paste structure (Markdown skeleton)

Use this as a starting point.

09-architectural-decisions.md
## 9. Architectural decisions
<Short intro: how do we capture decisions and keep them current?>
| Date | Decision | Status |
| :--- | :------- | :----- |
| ... | ... | ... |
<If you have decisions without ADRs, keep them here too.
The decision can just be plain text plus 1–3 lines of motivation.>

And an ADR template that matches the timeline:

### ADR-XXX <Decision statement>
- **Date:** YYYY-MM-DD
- **Status:** Pending | Accepted | Superseded by ADR-YYY
- **Decision makers:** <names or roles of the people who made the decision>
#### Context
<What problem or constraint triggered this decision?>
#### Decision
<One short paragraph: what did we decide?>
#### Consequences
- <what gets better>
- <what gets harder>
- <follow-up work / migration notes>
#### Considered options
1. <option A: short statement>
   - **Pros**:
     - <reason>
   - **Cons**:
     - <reason>
2. <option B: short statement>
   - **Pros**:
     - <reason>
   - **Cons**:
     - <reason>
#### References
- Affects: chapter 4/5/6/7/8 links (optional)
- Related concept: chapter 8.n (optional)
- Related code: <path or repo link> (optional)

Example (Pitstop)

Pitstop is my small demo system for this series. It is intentionally simple, so the documentation stays shareable.

Below is a small timeline table plus one ADR example.

9. Architectural decisions

| Date | Decision | Status |
| :--- | :------- | :----- |
| 2026-01-18 | ADR-001 Add degraded mode for workshop updates | Accepted |

ADR-001 Add degraded mode for workshop updates

  • Date: 2026-01-18
  • Status: Accepted
  • Decision makers: Michaël (architect), workshop team (developers and testers)

Context

Workshop connectivity is not reliable in every garage. Status updates must remain possible during outages, and the system must recover safely.

Decision

Pitstop supports a degraded mode where the workshop UI can keep working while offline. Updates are queued locally and replayed later with idempotency keys to prevent double-apply.

Consequences

  • Workshop UI becomes stateful and needs conflict handling.
  • Backend needs idempotency storage and replay rules.

Considered options

  1. Reject updates when offline
    • Cons:
      • blocks the workshop and causes lost work
  2. Allow offline updates without idempotency
    • Cons:
      • unsafe replays and duplicate state changes on reconnect
  3. Local queue with idempotency keys
    • Pros:
      • safe replay, workshop keeps moving

References

  • Concept: degraded mode and idempotency
  • Scenario: appointment import and status updates

To browse the full Pitstop arc42 sample, see my GitHub Gist.

Common mistakes I see (and made myself)

  1. Not realizing you made a decision
    Many decisions show up as “small choices” in a sprint. If it shapes future work, record it.

  2. Skipping considered options
    This is how you get time-travel debates later. A short “why not” list is often enough.

  3. Decisions without consequences
    If there is no trade-off, it is probably not a decision. Write down what gets harder, not only what gets easier.

  4. No successor trail
    Decisions can be overturned with new insights. Do not delete the old one; supersede it and link forward.

  5. Logging everything
    If reverting the change does not hurt, it is probably too small for chapter 9. Keep this chapter high signal.

Done-when checklist

🔲 Chapter 9 contains a scan-friendly timeline of decisions.
🔲 Each entry has at least the decision and a short motivation.
🔲 Important decisions have considered options recorded.
🔲 Decisions link to where they show up (chapters 4–8).
🔲 Quality trade-offs connect to quality scenarios in chapter 10.

Next improvements backlog

  • Add lightweight review: ADRs are accepted before major implementation work starts.
  • Add cross-links from chapter 8 concepts back to the decisions that introduced them.
  • Supersede decisions when they are changed, and link to the new one.

Wrap-up

Chapter 9 is the memory of your architecture. It keeps the reasoning visible, even when the team changes and the code evolves.

Decisions and quality requirements reinforce each other. A decision often accepts a trade-off, and chapter 10 is where you make those trade-offs measurable.

Next up: arc42 chapter 10, “Quality requirements”, where we turn quality goals into concrete scenarios and checks.

/ Michaël Hompus

Chapter 8 is the patterns and practices chapter. It captures the reusable concepts that keep your domain code lean and your runtime scenarios readable: security, resilience, observability, integration rules, and other "plumbing" that should be consistent. In this article I explain what belongs in chapter 8, what to keep out, a minimal structure you can copy, plus a small example from Pitstop.

This post is about chapter 8: Cross-cutting concepts, the first chapter in the “Reusables, decisions, and qualities” group.

Chapters 5, 6, and 7 described structure, runtime, and deployment. Chapter 8 is where I document the reusable ideas that make those chapters readable and maintainable.

I think of chapter 8 as the patterns and practices chapter. It is often the “non-functional” code. Not the business logic, but everything needed to make that core behave correctly.

Note

arc42 calls this chapter “Cross-cutting concepts”. In practice, I often just call it “Concepts” as I treat it as “concepts relevant to the whole system at this level of detail”. For a product landscape that can mean platform-wide conventions. For a single microservice it can mean service-wide patterns and internal rules.

What belongs in chapter 8 (and what does not)

The main job of chapter 8 of an arc42 document is to answer:

Which patterns and practices should be applied consistently across the system, and how do we do that?

What belongs here:

  • Patterns and rules that apply across multiple building blocks or scenarios, not just one module.
  • Reusable conventions you want implemented consistently over time, even if they currently apply only once.
  • “Plumbing” that supports the domain but is not domain logic itself: the infrastructure behavior that makes core code work correctly.
  • Concept-level configuration behavior: what a mode or flag means and which behavior changes when it toggles. The where and how to configure it usually lives in chapter 7.
  • Shared domain definitions (aggregates, state machines, vocabulary) that every module depends on.

A simple test that works well:

  • If you want developers to implement something the same way in multiple places over time, document it here.
  • Link to it from the scenarios in chapter 6 and from relevant building blocks in chapter 5.

What does not belong here:

  • Feature-specific domain rules and workflows.
    Those belong in the building blocks (chapter 5) and scenarios (chapter 6).
  • A repeat of the runtime scenarios.
    Chapter 8 should let chapter 6 stay lean.
  • A raw list of configuration settings.
    Chapter 8 should explain what a setting means and why it exists, not list every key in the system. The full reference is better placed in chapter 7 or a dedicated config reference.
  • Highly local implementation details that are unlikely to be reused.
    Those belong close to the code, or in an ADR when it is a decision with consequences (chapter 9).
  • Hard architectural constraints or enterprise policies.
    Mandates like “Cloud First” or compliance rules belong in chapter 2. Chapter 8 documents the reusable patterns you designed, not the constraints you were forced to follow.

Tip

Chapter 8 is where you replace repeated paragraphs in chapter 6 with one link. That is a good trade.

Common concept categories

Not every system needs all of these, but this list helps as a starting point. Pick what applies:

  • Security: identity, RBAC/ABAC, tenant scoping, service-to-service auth, secret handling rules
  • Resilience: retries/backoff, circuit breakers, offline/degraded mode, idempotency rules
  • Observability: correlation IDs, structured logging, key metrics, tracing, alerting conventions
  • Data and consistency: source-of-truth rules, eventing/outbox patterns, read models, audit trail
  • Integration conventions: contract versioning, error mapping, rate limits, vendor protection
  • Configuration model: precedence rules, environment overrides, feature flags, safe defaults
  • Domain model: aggregate boundaries, state machines, shared vocabulary, key invariants
  • Test strategy: test data management, standard tools, integration test patterns, performance constraints
  • UI/UX patterns: standard layouts, error notifications, accessibility rules, design system integration

Who this chapter is for

Most of the time, chapter 8 is primarily useful for the dev team and maintainers. It prevents five different implementations of the same thing.

External stakeholders usually do not care about your retry policy or correlation ID format. They might care when it explains system guarantees (auditability, safety, recovery time), or when they want inspiration because your team is the shining example, sharing its awesome implementation in its arc42 document. 💎😉

Chapter 8 vs. Chapter 9: Concepts vs. decisions

A common question: when does something belong in chapter 8 versus chapter 9 (ADRs)?

The boundary is clearer than it first appears:

  • Chapter 8 documents how we do X consistently: the pattern, the practice, the implementation standard.
  • Chapter 9 documents why we chose X over Y: the decision, the alternatives considered, the trade-offs, and the context that made the choice make sense.

They work together:

  • The ADR explains the choice and constraints.
  • The concept explains how to implement it correctly and where it shows up.

Linking them:
Always cross-reference. The concept should link to the ADR. The ADR should link to the concept.

Tip

If you document a concept without a decision, that is fine; many concepts emerge gradually.
If you document a decision without a concept to implement it, that might be a signal the decision is planned but not yet implemented.

Aggregates, entities, and the shared domain vocabulary

In many systems, there are a few domain concepts that show up everywhere: work orders, customers, assets, cases, incidents, whatever your core “things” are.

When those concepts apply across the whole application, I document their aggregate boundaries and entity responsibilities in chapter 8. Not because chapter 8 is a domain chapter, but because these definitions act like a shared rulebook.

This helps in three places:

  • It keeps chapter 5 focused on structure, not repeating the same domain definitions per building block.
  • It keeps chapter 6 readable, because scenarios can reference “WorkOrder” and everyone knows what that means.
  • It reduces accidental coupling, because aggregate boundaries become explicit.

What I put here is deliberately lightweight:

  • Aggregate name and purpose
  • What it owns (entities, value objects)
  • Key invariants (rules that must always hold)
  • State transitions and lifecycle notes
  • Identity and scoping rules (IDs, tenant/site boundaries)
  • Events published or important integration touch points (high level)

If you need a full data dictionary or complete schema documentation, do not force it into this chapter. Link to a domain model reference, or split it into a separate document and keep chapter 8 as the “shared rules” summary.

Tip

While documenting these core terms, check if they are already in the glossary (chapter 12). If a term is strictly structural, keep it here. If it is business language used by stakeholders, ensure it lands in chapter 12 too.

How to keep chapter 8 from becoming a junk drawer

This chapter is vulnerable to entropy. Everything is a “concept” if you stare at it long enough.

A few guardrails that help:

  • Prefer “rules + rationale” over “technology lists”.
  • Keep each concept small:
    • what it is
    • why it exists
    • how to implement it
    • how to test or verify it
    • where it shows up (links to scenarios, building blocks, ADRs)
  • If a section becomes a wall of text, split it: move low-level specifics into a code-linked doc and keep chapter 8 as the overview.
  • When a concept evolves, document both the current standard and the migration path. Mark old approaches explicitly as “legacy” or “deprecated” with a timeline, and link to the ADR (chapter 9) that explains why it changed. This prevents new code from following outdated patterns while giving teams visibility into what they need to update.

The minimum viable version

If you are short on time, aim for:

  • 3–6 concepts that either:
    • already affect multiple parts of the system, or
    • are patterns you want future work to follow (even if they currently apply once)
  • For each concept, include:
    • a short description
    • the key rule(s)
    • where it shows up (links)
    • one or two implementation notes that prevent mistakes

That is enough to keep future work consistent.

Copy/paste structure (Markdown skeleton)

Use this as a starting point. Keep it flexible.

08-concepts.md
## 8. Concepts
<Short intro: what concepts matter for this system and why?>
### 8.n <Concept name>
<1–3 short paragraphs: what it is and why it exists.>
#### Rules (optional)
- <rule 1>
- <rule 2>
#### Implementation (example-level, not every detail)
- <how it is implemented in this system>
- <where it lives in the code, if useful>
#### Configuration (optional)
- <which settings affect this concept and what they mean>
- <link to chapter 7 for where it is configured>
#### Verification (optional)
- <how do we know it works: tests, logs, dashboards, runbooks>
#### Where it shows up
- Scenario: chapter 6.x (link)
- Building block: chapter 5.x (link)
- ADR: chapter 9.x (link)

Note

Do not force a rigid template on every concept. Some concepts need a rules section, some need a diagram, some need one paragraph and a link. Consistency helps, but clarity helps more.

Example (Pitstop)

Pitstop is my small demo system for this series. It is intentionally simple, so the documentation stays shareable.

Below are four concept examples that make chapter 6 easier to read, and make chapter 7 configuration feel meaningful instead of arbitrary.

8.1 Identity and access (RBAC)

Pitstop uses role-based access control (RBAC) to keep workshop actions safe and auditable. The UI can hide buttons, but the server enforces authorization. The UI is not a security boundary.

Rules

  • Every endpoint that reads or changes work orders requires an explicit policy.
  • Authorization is validated server-side for both HTTP and real-time actions.
  • Claims include a garageId to scope access per site.

Implementation

  • Auth: JWT bearer tokens.
  • Authorization: policy-based checks, mapped from roles and claims.

Claims (example)

  • role: Mechanic, Foreman, ServiceAdvisor
  • garageId: used for tenant or site scoping
  • permissions: optional fine-grained list for exceptions (for example discount approval)
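
As a sketch, a decoded token payload might look like this (the claim names follow the list above; the values and the sub claim are illustrative assumptions):

```json
{
  "sub": "user-0042",
  "role": "Mechanic",
  "garageId": "garage-007",
  "permissions": ["discount.approve"]
}
```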

Where it shows up

  • Scenarios: status updates and overrides in chapter 6.
  • Deployment: token validation settings and identity provider wiring in chapter 7.

8.2 Work order (aggregate / domain model)

The work order is the central aggregate in Pitstop. Every module, scenario, and UI revolves around it. Documenting it here gives the whole team a shared definition to build against.

Aggregate boundary

A work order owns its tasks, status, notes, and parts dependencies. It does not own the appointment (that belongs to the planning service) or the customer record.

Lifecycle (state machine)

Work order lifecycle diagram
  • Only forward transitions are allowed by default.
  • WaitingForParts ↔ InProgress can toggle when parts arrive or a new dependency is found.
  • A Foreman can force-transition to any state (override).

Key invariants

  • A work order always has exactly one active status.
  • Status changes are audited (who/when/why, see concept 8.4).
  • Identity: WO-{sequence}, scoped by garageId.

Where it shows up

  • Building blocks: Work Order Module in chapter 5.
  • Scenarios: every chapter 6 scenario references work order state.
  • Audit: status changes feed the audit log (concept 8.4).

8.3 Degraded-mode workshop operation (local queue + idempotency)

Workshop connectivity is not always reliable. Pitstop supports a degraded mode where the workshop UI can keep working and sync later.

Rules

  • Workshop updates are queued locally when offline.
  • Every queued item has an idempotency key so replays do not double-apply.
  • Replay happens in order. Hard conflicts stop the replay and require user resolution.

Implementation

  • Workshop UI stores updates in a local outbox queue (for example IndexedDB).
  • Each item includes an idempotency key derived from work order, version, and actor context.

Queue item (example)

{
  "idempotencyKey": "WO-7781:v42:mechanic-17:2026-01-12T10:41:00Z",
  "workOrderId": "WO-7781",
  "command": "ChangeStatus",
  "payload": {
    "status": "WaitingForParts",
    "note": "Brake pads not in stock"
  },
  "queuedAt": "2026-01-12T10:41:00+01:00"
}
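
A minimal replay sketch in TypeScript, assuming a hypothetical OutboxStore wrapper around IndexedDB and a sendCommand() API client (an illustration of the rules above, not the actual Pitstop code):

```typescript
// Queue item shape, matching the JSON example above.
interface QueueItem {
  idempotencyKey: string;
  workOrderId: string;
  command: string;
  payload: unknown;
  queuedAt: string;
}

// Hypothetical wrapper around the local outbox storage (for example IndexedDB).
interface OutboxStore {
  listInOrder(): Promise<QueueItem[]>; // items in the order they were queued
  remove(key: string): Promise<void>;  // remove by idempotency key
}

type SendResult = "applied" | "duplicate" | "conflict";

async function replayOutbox(
  store: OutboxStore,
  sendCommand: (item: QueueItem) => Promise<SendResult>
): Promise<void> {
  // Replay strictly in order; the server deduplicates on the idempotency key.
  for (const item of await store.listInOrder()) {
    const result = await sendCommand(item);
    if (result === "conflict") {
      // Hard conflict: stop the replay and require user resolution.
      break;
    }
    // "applied" and "duplicate" are both safe: the update is in effect server-side.
    await store.remove(item.idempotencyKey);
  }
}
```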

Configuration behavior

  • If Pitstop:ConnectivityMode = OfflineFirst, the UI queues first and sends async.
  • If OnlineFirst, the UI sends immediately and queues only on failure.

The meaning of ConnectivityMode is documented here. Where it is configured (env vars, config files) is documented in chapter 7.

Where it shows up

  • Scenarios: status update flows in chapter 6.
  • Deployment: the ConnectivityMode setting in chapter 7.

8.4 Observability

Every request and event carries a correlationId so ops can trace a flow end-to-end. Logs are structured (JSON), and a small set of metrics drives the alerting that lets ops sleep.

Rules

  • Every log entry includes correlationId, workOrderId (when applicable), and garageId.
  • Metrics are kept small and actionable:
    • sync_queue_depth: are outbound updates piling up?
    • status_update_latency_ms (p95): is the workshop experience degrading?
    • ws_connected_clients: are workshops connected?
  • Alert example: sync_queue_depth > 100 for 10 minutes → vendor down or credentials broken.
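
A structured log entry could then look roughly like this (the field names follow the rules above; the values are made up):

```json
{
  "timestamp": "2026-01-12T10:41:03Z",
  "level": "Information",
  "message": "Work order status changed",
  "correlationId": "c0ffee01-aaaa-bbbb-cccc-000000000001",
  "workOrderId": "WO-7781",
  "garageId": "garage-007",
  "status": "WaitingForParts"
}
```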

Where it shows up

  • Scenarios: every chapter 6 flow carries correlationId.
  • Deployment: log sink and dashboard configuration in chapter 7.

To browse the full Pitstop arc42 sample, see my GitHub Gist.

Common mistakes I see (and made myself)

  1. Treating “cross-cutting” as a hard gate
    Even if you are documenting something small like a microservice, service-wide concepts are still useful. The chapter title does not need to police you.
    Rename the chapter to “Concepts” if that helps, but do not skip it just because you think “cross-cutting” means “multi-service”.

  2. Waiting until a pattern appears everywhere
    If you already know a rule should become standard, document it early. That is how you steer future work. arc42 can start at the drawing board, even before a single line of code is written.

  3. Turning chapter 8 into a dump
    A list of random libraries is not a concept chapter. Prefer rules, rationale, and where it shows up. Future team members or maintainers should be able to read this chapter and understand the key patterns without needing to read every line of it.

  4. Repeating concept explanations in every scenario
    If you notice that chapter 6 starts to contain the same text multiple times, move it here and link to it.

  5. No link back to reality
    If a concept never shows up in code, runtime scenarios, or configuration, it is probably planned but not yet implemented. That is fine, but mark it clearly and revisit it. Maybe new insights have emerged and it is no longer the right pattern.

Done-when checklist

🔲 The chapter contains concepts that are reused or intended to be reused over time.
🔲 Each concept includes at least one actionable rule, not only a description.
🔲 Concepts link to where they show up (chapters 5, 6, 7, and later ADRs in chapter 9).
🔲 The chapter helps keep runtime scenarios lean by avoiding repeated explanations.
🔲 A maintainer can implement a new feature without reinventing logging, retries, idempotency, or authorization.

Next improvements backlog

  • Add links to code locations when concepts map cleanly to modules or packages.
  • Add verification notes for concepts that can fail in production (dashboards, alerts, runbooks).
  • Add concept-level configuration tables only for settings that change behavior significantly.
  • Split large concepts into “overview here, details in a linked doc” when they grow.

Wrap-up

Chapter 8 is where I capture the reusable solutions that make the rest of the document cheaper to maintain. It keeps the domain code focused, and it keeps chapter 6 readable.

Next up: arc42 chapter 9, “Architectural decisions”, where we record all the decisions that we made along the way.

/ Michaël Hompus

This post is about chapter 7: Deployment view, the last chapter in the "How is it built and how does it run" group. Chapter 7 answers: where do your building blocks run, in which environments, and with which settings? This chapter turns "it works on my machine" from tribal knowledge into shared documentation. No more guessing which settings matter or where things actually run.

This post is about chapter 7: Deployment view, the last chapter in the “How is it built and how does it run” group.

Small milestone: chapter 7 means we are past the halfway point of the 12 arc42 chapters.

Chapter 5 gave us the map (building blocks). Chapter 6 showed how those blocks collaborate at runtime. Chapter 7 answers the next question: where do those blocks run, in which environments, and with which settings?

Note

This chapter turns “it works on my machine” from tribal knowledge into shared documentation. No more guessing which settings matter or where things actually run.

Also: “my machine” can be a perfectly valid environment. If onboarding and local dev matter, document that setup as a real deployment variant.

What belongs in chapter 7 (and what does not)

The main job of chapter 7 of an arc42 document is to answer:

Where does the system run, how is it wired, and what needs to be configured to make it behave correctly?

What belongs here:

  • A deployment overview of nodes, environments, and their connections. Think: hosts, clusters, networks, segments, and the paths between them.
  • A mapping from building blocks to infrastructure: which blocks run where, and which ones are “shared” vs “per environment/per site”.
  • The runtime configuration that is required to run the system and that changes behavior: environment variables, config files, feature flags, connection strings, default values, and required secrets (at least at a reference level).
  • Operational concerns that affect how the system holds up: what scales, what is isolated, what happens when a node goes down.
  • Trust boundaries and data classification: which networks are public vs. private, and where sensitive data is allowed to live.
  • Persistence strategy: especially for containerized setups, explicitly state where state lives (volumes, managed databases) and if it is backed up.
  • Quality and/or performance features of the infrastructure when they matter: expected throughput, latency constraints, availability targets, bandwidth limitations, or time synchronization (NTP) requirements.
  • Links to deployment assets when they are the source of truth: Dockerfiles, Helm charts, Kustomize overlays, Terraform/Bicep/ARM, compose files, install scripts, etc. If your base image choice matters (for size, security, or compliance), add a short note on why.

Tip

If you have full Infrastructure as Code (IaC), Chapter 7 is the map; your Terraform or Bicep is the construction crew. Do not duplicate every setting from your IaC here. Instead, explain the topology that the code creates.

What does not belong here:

  • A full re-explanation of your building blocks or domain responsibilities. This chapter is about placement and wiring, not reintroducing the system.
  • Detailed runtime scenarios (“and then it calls X”) unless the scenario is specifically about deployment behavior (e.g., failover sequence, blue-green switch, cold start, disaster recovery).
  • Interface payload catalogs and protocol specs. Link to where contracts live, and keep chapter 7 focused on infrastructure and configuration.
  • A giant unstructured dump of “every setting we ever had” without context. Configuration belongs here, but it needs a structure: defaults, required vs optional, and what it influences.

Where to document configuration

Note

Strictly speaking, arc42 does not prescribe a configuration section in chapter 7. The template typically places the what of a setting (meaning, default, contract) in chapter 5 (building block interfaces) and the how (override strategy, config patterns) in chapter 8 (cross-cutting concepts). Chapter 7 itself only covers the where: which node, which manifest, which secret store.

I prefer to consolidate configuration in chapter 7. When a newcomer asks “what do I need to configure to make this run?”, I want the answer to be in one place, right next to the infrastructure it runs on. Splitting it across chapters 5, 7, and 8 is structurally clean but practically hard to navigate.

If you have a separate place where configuration is documented (runbook, ops handbook, generated config reference), link to it and keep chapter 7 as the map.

If you do not have a separate configuration reference, chapter 7 is a practical home for it: everything needed to make the application run, and everything that changes behavior per environment.

That usually includes:

  • environment variables and configuration keys
  • config files and naming conventions (appsettings.{Environment}.json, .env, mounted files, etc.)
  • default values
  • “required in production” vs “optional”
  • where it is set (deployment manifest, secret store, CI variables)
  • what it impacts (behavior, performance, safety, compliance)

Warning

Never document actual secret values (API keys, passwords, connection strings) in this chapter. Only document the names of the secrets or which vault they live in.
If I see a potential password in a markdown file, I will find you! 👮‍♂️😉

A practical structure that stays readable:

  • a short “configuration model” section (how config is loaded/overridden)
  • a table of key settings (only the ones that matter)
  • links to the “full reference” if/when you have one
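
As a sketch, such a table can stay very small (Pitstop__ConnectivityMode is the setting used in the Pitstop example below; the connection string entry is a hypothetical placeholder):

| Key / setting | Default | Required | Where set | What it influences |
| :------------ | :------ | :------- | :-------- | :----------------- |
| Pitstop__ConnectivityMode | OnlineFirst | No | container env var | offline queueing and retry behavior |
| ConnectionStrings__Pitstop | (none) | Yes | secret store | database access |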

The minimum viable version

If you are short on time, aim for this:

  1. One main deployment diagram for the most important environment (often production-like).
  2. A short mapping table: which building blocks run where.
  3. A small “runtime configuration” section: the 5–15 settings that decide behavior, plus where they live.

That is already enough to stop most “but it worked yesterday” surprises.

Copy/paste structure (Markdown skeleton)

Sections 7.1 and 7.2 follow arc42’s infrastructure levels. I add a dedicated configuration section per infrastructure element; arc42 would split that across chapters 5 and 8 (see the note above).

For multiple environments, arc42 suggests copying the 7.1 structure. Keep it small and add depth only when it matters.

07-deployment-view.md
## 7. Deployment view
<Short intro: what environments exist and what matters operationally?>
### 7.1 Infrastructure Level 1
<Diagram: nodes, connections, trust boundaries, and where blocks run.>
```plantuml
@startuml
skinparam shadowing false
node "Host / Cluster" {
  node "Runtime" {
    artifact "Component A" as A
    artifact "Component B" as B
  }
  database "Database" as DB
}
cloud "External Systems" {
  node "Neighbor" as N
}
A --> B
B --> DB
B --> N
@enduml
```
#### Motivation
<Why this deployment topology? What drove the decisions?>
#### Quality and/or performance features (optional)
<Relevant infrastructure qualities: throughput, latency, availability, bandwidth.>
#### Mapping (what runs where)
| Building block | Runs on | Notes |
| :------------- | :------ | :---- |
| ... | ... | ... |
### 7.2 Infrastructure Level 2 (optional)
<Zoom into specific infrastructure elements from Level 1 that need more detail.>
#### 7.2.1 <Infrastructure element>
<Internal structure, operational details for this element.>
##### Configuration
<Runtime configuration for components running on this element.
arc42 would place setting definitions in chapter 5 and the override strategy in chapter 8;
I keep them here so everything needed to deploy lives in one place.>
| Key / setting | Default | Required | Where set | What it influences |
| :------------ | :------ | :------- | :-------- | :----------------- |
| ... | ... | ... | ... | ... |

Example (Pitstop)

Pitstop is my small demo system for this series. It is intentionally simple, so the documentation stays shareable.

This is what chapter 7 looks like when filled in.

7. Deployment view

7.1 Infrastructure Level 1 - Single garage

Situation: one garage runs Pitstop on-prem on a single Docker host; the UIs and backend run as containers next to the database.

Deployment view

Motivation

Small garages need a self-contained setup that works on a single machine without external dependencies. All components share one host to keep operations simple.

Mapping

| Building block | Runs on | Notes |
| :------------- | :------ | :---- |
| Workshop Management Service | Docker container | Main backend |
| Customer Management Service | Docker container | Shares host with backend |
| Pitstop UI | Docker container | Served via reverse proxy |
| SQL Database | Docker container | Persistent volume on host |
| Message Broker | Docker container | RabbitMQ, single node |

7.1 Infrastructure Level 1 - Multi-site

Situation: a garage chain wants central reporting and audit, but each site needs fast workshop responsiveness even with shaky connectivity.

  • Central DB + audit store; site-level caches for workshop responsiveness.
  • Reporting can run off read replicas.

Motivation

Garage chains need central reporting and audit, but the workshop still needs to feel fast locally. Sites must keep working even when connectivity to the central system is unreliable.

Mapping

| Building block | Runs on | Notes |
| :------------- | :------ | :---- |
| Workshop Management Service | Site Docker host | Local-first, syncs to central |
| Customer Management Service | Central cluster | Shared across sites |
| Pitstop UI | Site Docker host | Served locally for responsiveness |
| SQL Database | Central cluster | Primary store, replicas per site |
| Message Broker | Central cluster | Federated; site-level queue for resiliency |
| Reporting Service | Central cluster | Reads from replicas |
| Audit/Event Log | Central cluster | Append-only, retained centrally |

Operational notes

  • Monitoring: request latency, WS connection health, sync queue depth, retry rate.
  • Backups: DB daily + audit log retention policy.
  • Security: network segmentation; outbound allowlist to planning/PSP endpoints.

7.2 Infrastructure Level 2

7.2.1 Docker host

Single Linux host running Docker Engine. All Pitstop containers and the database run here. Images are built with multi-stage Dockerfiles to keep the final image small and free of build tooling.

Configuration

Pitstop behavior differs per garage and network reliability. These settings are owned by Ops and injected via container environment variables or mounted config files.

Key setting: ConnectivityMode

  • OnlineFirst (default): normal operation, real-time updates preferred
  • OfflineFirst: prioritize local queueing + aggressive retries (workshop-heavy garages / flaky Wi-Fi)

Where configured

  • container env var Pitstop__ConnectivityMode or appsettings.{Environment}.json
appsettings.Production.json
{
  "Pitstop": {
    "ConnectivityMode": "OfflineFirst",
    "Realtime": {
      "Transport": "WebSocket",
      "FallbackToPollingSeconds": 5
    },
    "Sync": {
      "RetryPolicy": "ExponentialBackoff",
      "MaxRetries": 10
    }
  }
}
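
For example, a docker-compose fragment could inject the same value via the env var (a sketch; the service name is an assumption):

```yaml
services:
  workshop-management:
    environment:
      - Pitstop__ConnectivityMode=OfflineFirst
```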

To browse the full Pitstop arc42 sample, see my GitHub Gist.

Common mistakes I see (and made myself)

  1. Only drawing “prod” and ignoring “dev”
    If local dev and onboarding matter, treat them as a real deployment variant. It does not have to be pretty, it has to be accurate.

  2. Mixing behavior and placement
    Chapter 7 is where things run and how they connect. Behavior generally belongs in runtime scenarios (chapter 6). Deployment-driven behavior (failover, DR, scaling) should be documented either here (when it depends on topology/environment) or in chapter 8 (when it is a reusable concept/pattern).

  3. Configuration without structure
    A thousand keys in a wall of text is not documentation, it is punishment. Group by domain/feature, document defaults, and call out which values change behavior.

  4. Forgetting operational boundaries
    Who owns what? Which node is “managed by ops” vs “managed by the team”? Which dependencies are inside your control, which are not?

  5. No traceability to building blocks
    If readers cannot map a box in the diagram back to a building block from chapter 5, the deployment view becomes “a nice picture” instead of a useful model.

Done-when checklist

🔲 The main environments/variants are described (or explicitly out of scope).
🔲 Building blocks are mapped to nodes/locations.
🔲 Key runtime configuration is documented: defaults, where set, and what it changes.
🔲 Operational concerns are at least acknowledged (monitoring, backups, security boundaries).
🔲 A newcomer can answer: “where does this run?” and “what do I need to configure?”

Next improvements backlog

  • Add an explicit mapping table for each relevant environment or variant.
  • Link the actual deployment assets (Dockerfile, Helm, Terraform, compose) where appropriate.
  • Add a small “secrets and trust boundaries” note (what must be protected, where it lives).
  • Add operational SLO/SLA expectations if availability and latency are key goals.

Wrap-up

Chapter 7 is the reality check: the system, placed on real infrastructure with real constraints. It is where “works in my head” becomes “works in an environment”.

With chapter 7 done, the full “How is it built and how does it run” group is complete.

Next up is the “Reusables, decisions, and qualities” group, starting with arc42 chapter 8, “Concepts”, where we document the reusable cross-cutting ideas (auth, logging, error handling) without duplicating them in every scenario.

/ Michaël Hompus

Chapter 6 describes runtime behavior: how building blocks collaborate in the scenarios that matter, including alternatives, exceptions, and the bits that tend to hurt in production. It is also the third chapter in the "How is it built and how does it run" group. In this article I show what belongs in chapter 6, what to keep out, a flexible structure you can copy, plus a small example from Pitstop.

This post is about chapter 6: Runtime view, the third chapter in the “How is it built and how does it run” group.

Chapter 5 gave us the map (building blocks and responsibilities). Chapter 6 shows how that map is used in real life: who talks to whom, in what order, and why.

The arc42 template keeps this chapter intentionally “empty” by default. It is basically a container for scenarios: one subchapter per runtime process you want to document.

Note

Chapter 6 can grow a lot. That is not a smell.
If users and external neighbors interact with your system, those flows are architecture.

What belongs in chapter 6 (and what does not)

Chapter 6 of an arc42 document answers:

How does the system behave at runtime in the scenarios that matter?

What belongs here:

  • All relevant runtime scenarios where there is meaningful interaction:
    • user interactions that change state or trigger workflows
    • integrations with external neighbors (inbound/outbound)
    • operationally important processes (batch runs, scheduled jobs, import/export)
    • flows that embody key quality goals (latency, availability, auditability, resilience)
  • For each scenario: the collaboration between the building blocks (names consistent with chapter 5).
  • Alternatives and exceptions where they exist: timeouts, retries, idempotency, partial failures, degraded/offline behavior, manual fallbacks.
  • Notes that help people reason about runtime behavior: correlation IDs, observability points, ordering guarantees, consistency expectations.

Tip

If a neighbor appears in the context view (chapter 3), try to let it show up in at least one runtime scenario over time.
If it never appears, that is useful feedback: maybe it is not a real neighbor, maybe it is background data, or maybe the relevant scenario is still missing.
Either way, treat it as a prompt to revisit the context view in your next iteration.

What does not belong here:

  • Long descriptions of static responsibilities and decomposition. This chapter is about collaboration over time, not “what exists”.
  • A full contract catalog or protocol reference. Link to specs where they live; keep this chapter focused on behavior and responsibilities.
  • Environment-specific deployment details. The runtime behavior should still make sense even if you deploy differently.
  • Low-value diagram noise: repeating “return payload” on every arrow when nothing is transformed, or expanding every internal hop when it adds no architectural insight.
  • Cross-cutting flows that are the same everywhere, such as the OAuth/OIDC login flow. That belongs in chapter 8 as a reusable concept (unless you are literally building an auth service 😅).

Note

Runtime view is where architecture stops being a set of boxes and becomes a set of promises: “this happens”, “this must not happen”, “this is how we recover”.

Diagrams (and how to keep them readable)

Sequence diagrams for sequential flows

Sequence diagrams are excellent at showing who talks to whom, in what order, and why it matters.

Focus on what changes to keep diagrams readable:

  • Request/response pairs
    Show them only when data transforms or meaning shifts. Skip the “return OK” arrows that just echo back what was sent.
  • Internal hops
    Compress them when a layer simply passes data through without adding architectural insight. Three layers calling each other with identical payloads? Show it as one arrow across the boundary.
  • Scenario intent
    Lead with it, not implementation noise. Readers should grasp the essential flow in seconds, then dive into details if they need them.

Example trade-off

A “create order” scenario does not need you to diagram every internal service call. Show the user action, the boundary entry point, the database write, and the response back. Skip the middleware, logging, and validation layers unless they embody a quality goal or failure path.
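
A compressed version of that flow could look like this in PlantUML (the participant names are illustrative, not the Pitstop building blocks):

```plantuml
@startuml
title Scenario: Create order (compressed)
autonumber
actor "User" as User
participant "Order API" as API
database "DB" as DB
User -> API : create order
API -> DB : write order
API --> User : order created (id)
@enduml
```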

BPMN for more complex flows

When scenarios have a lot of branching (alt/if/else), loops, or delays, sequence diagrams can become a spaghetti scroll. 🍝

That is where BPMN often shines: it stays readable as complexity grows. Camunda Modeler is my go-to tool.

Trade-off: BPMN is typically stored as XML, which is not fun to review in a repo. So exporting diagrams as images becomes an extra step. Just keep the source file and the exported image together.

Tip

You do not need to pick one diagram type for everything. Consistency helps, but clarity helps more.

The minimum viable version

If you are short on time, aim for this:

  1. Start with 1–3 scenarios that cross boundaries: user → system and system → neighbor.
    The first user flow, the first integration, and the first “what if it fails?” path.
  2. For each scenario, add:
    • the intention (what we are trying to achieve)
    • the main participants (aligned with chapter 5)
    • the happy path
    • the first important exception (retry/fallback/manual procedure)
  3. Keep the scenario sections small. Grow later.

Copy/paste structure (Markdown skeleton)

Use this as a starting point. Keep it flexible.

06-runtime-view.md
## 6. Runtime view
<If you have a lot of scenarios, add a table of contents here with links to each scenario.>
### 6.n <Scenario name>
<Write a short intro for the scenario. A few lines is enough: what happens, why it matters, and what "done" looks like.>
_Optional prompts you can include (pick what helps):_
- _Intention:_ what are we trying to achieve?
- _Trigger:_ what starts this (user action, event, schedule)?
- _Participants:_ which building blocks and neighbors are involved?
- _Notes:_ assumptions, alternatives, open topics, relevant links (e.g., ADRs, specs, concepts)
```plantuml
@startuml
title Scenario: <name>
autonumber
actor "User/External" as Actor
participant "UI/Client" as UI
participant "Backend" as Backend
participant "Integration/Adapter" as Integration
database "DB" as DB
Actor -> UI : action
UI -> Backend : request
Backend -> DB : read/write
Backend -> Integration : call out / publish
Integration --> Backend : ack/response
Backend --> UI : result
@enduml
```
**Exceptions and alternatives (optional):**
- <timeout> → <behavior/fallback>
- <partial failure> → <behavior/fallback>
- <manual recovery step> → <who does what>

Tip

If your chapter becomes large (and it might), group scenarios by theme or functionality: “User flows”, “Integrations”, “Operational jobs”, etc.

Example (Pitstop)

Pitstop is my small demo system for this series. It is intentionally simple, so the documentation stays shareable.

6.1 Scenario: Create/update order on appointment import

Why this scenario matters

  • It hits the core value: planning ↔ execution sync.
  • It exercises consistency, auditability, and integration boundaries.

Runtime view

Failure / exception notes

  • Planning API unavailable → Sync queues outbound updates with retry + backoff.
  • Duplicate appointment updates → idempotency key (appointmentId + version/timestamp).
  • Conflicting edits → “last-write-wins” only for safe fields; status changes may require rules (e.g., foreman override).

To browse the full Pitstop arc42 sample, see my GitHub Gist.

Common mistakes I see (and made myself)

  1. Only documenting the happy path
    Architecture shows up in failure handling, retries, timeouts, and recovery.

  2. Diagrams that do not match your building blocks
    If names and boundaries differ from chapter 5, readers lose their mental model.

  3. Diagram noise instead of insight
    Do not waste pixels on repetitive returns and unchanged payloads.
    Compress internal hops when they add no architectural value.

    Instead of: User → API → Service → Repository → DB → Repository → Service → API → User
    Prefer: User → API → Domain write (DB) → API → User
  4. Avoiding “big” because it might become big
    Documenting 27 scenarios is not wrong.
    It becomes wrong when nobody can find anything.
    Group them, index them, or split into linked documents when the system is large.

  5. No external stakeholder recognition
    If a neighbor/system owner cannot recognize their part in the flow, you probably did not document it clearly enough.

Done-when checklist

🔲 Relevant user and neighbor interactions are covered (or explicitly postponed).
🔲 Scenario diagrams use building block names consistent with chapter 5.
🔲 Key alternatives/exceptions are documented where they matter.
🔲 The chapter is navigable: scenarios are grouped and titled clearly.
🔲 Readers can explain “what happens when…” without guessing.

Next improvements backlog

  • Add scenarios for remaining neighbors until the context view is “explained by runtime”.
  • Add idempotency/retry rules for integrations that can duplicate messages.
  • Add observability notes per scenario (correlation IDs, key logs/metrics).
  • Split scenarios into separate linked arc42 documents if the system warrants it.

Wrap-up

Chapter 6 is where your architecture becomes visible in motion.

Next up: arc42 chapter 7 “Deployment view”, where we map building blocks onto infrastructure and environments.