Playbook: Alex for Software Developers

Your reference for applying Alex to architecture decisions, code quality, debugging, documentation, testing, and incident response. Ready-to-run prompts — built around the hard parts of real software work, not the textbook exercises.


What This Guide Is Not

This is not a habit formation guide (see Self-Study Guide for that). This is a domain use-case library — the specific ways Alex supports professional software development work.


Where to Practice These Prompts

Every prompt in this guide works with any AI assistant you already use — GitHub Copilot, ChatGPT, Claude, Gemini, or others. The prompts are the skill; the tool is just where you type them. If you already have a preferred tool, start there.

For the deepest experience, the Alex VS Code extension (free) was built for these workflows. It understands software development context, lets you save what works with /saveinsight, and keeps your playbook and exercises right inside the editor where you already work.

You don’t need a specific tool to benefit. You need the discipline of reaching for AI when the work is genuinely hard — not just when it’s repetitive.


Core Principle for Developers

The developer who uses AI well is not the one who types less; it is the one who thinks at a higher level of abstraction — spending time on design, tradeoffs, and quality while AI handles the mechanical generation. The risk is the reverse: using AI to generate code you have not understood, which creates the appearance of progress while accumulating invisible technical and security debt.

Your primary discipline with Alex: stay in the tradeoff layer. Use it to think through options, pressure-test decisions, and find holes in your reasoning before they become production incidents.


The Seven Use Cases

1. Architecture Decision Records (ADRs)

The developer’s architecture challenge: Architecture decisions are made every day in software teams, but most are never documented. The decision gets made, the rationale is in someone’s head, and six months later the team is debating undoing the decision without knowing why it was made. ADRs are the practice of capturing the decision, the context, the alternatives considered, and the consequences — so future-you and future-teammates do not have to reverse-engineer the thinking.

Prompt pattern:

I am making an architecture decision: [describe the decision].
Context: [what the system does, tech stack, team size, constraints].
Options I am considering:
  Option A: [description]
  Option B: [description]
  Option C: [description if applicable]
My current lean: [which option and why, in your words].

Help me:
1. Challenge my reasoning — what am I not seeing?
2. Identify the hidden costs of my preferred option
3. Articulate what must be true for my preferred option to be the right call
4. Draft an ADR that honestly represents the tradeoff, not just the conclusion

Follow-up prompts:

What does this decision look like in three years under [growth / scaling / team change]? Does my preferred option still hold?
Argue for the option I am not choosing. Give me the strongest case for it.
What organizational or team changes would invalidate this architecture decision — and how likely are those changes?

Try this now: Your team is debating whether to migrate from a monolithic REST API to microservices. You have 12 developers, a Django monolith serving 50K daily users, and a 6-month runway before a major feature launch. Paste that context into the ADR prompt above and ask Alex (or ChatGPT, or Claude) to challenge your reasoning. You will get back questions you had not considered — and that is the point.


2. Code Review and Technical Critique

The developer’s review challenge: Most code reviews are style enforcement with a veneer of design feedback. The hard review questions — “should this component exist at all,” “is this abstraction hiding complexity or managing it,” “what happens to this code when the next three requirements come in” — are avoided because they are more expensive, more uncomfortable, and require more context than verifying naming conventions.

Prompt pattern:

Review this code:
[paste code block]

Focus on:
1. Correctness — will this do what it says under edge cases?
2. Clarity — can the next developer understand what this is doing and why?
3. Design — is this abstraction appropriate for the problem?
4. Maintainability — what happens when the requirements change in a predictable way?
5. Security — any obvious vulnerabilities or dangerous patterns?

Skip style enforcement — I have a linter.

Follow-up prompts:

What is the most likely bug in this code that my tests would not catch?
What is the best version of this code? Show me and explain each change.
What would a senior engineer question in a review of this code?
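A concrete example of what to paste into the pattern above. The helper below is invented for illustration: it passes its happy-path test, but a review driven by the prompt's "Correctness" focus should catch the silent edge case.

```python
def paginate(items, page, page_size=20):
    """Return one page of items (1-indexed page number)."""
    start = (page - 1) * page_size
    return items[start:start + page_size]

# Happy-path check passes...
assert paginate(list(range(50)), 1, 10) == list(range(10))
# ...but page=0 computes a negative slice start and silently returns an
# empty page instead of raising -- exactly the edge-case question the
# "Correctness" focus in the prompt is meant to surface.
assert paginate(list(range(50)), 0, 10) == []
```

A linter will never flag this; the review question "will this do what it says under edge cases?" will.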

3. Debugging Reasoning

The developer’s debugging challenge: Debugging is a hypothesis-driven discipline. The common failure mode is too few hypotheses: latching onto the first plausible explanation and debugging toward confirming it rather than testing it. The developers who debug fast are the ones who generate multiple hypotheses quickly, design experiments that falsify rather than confirm, and do not confuse “the code looks fine” with “the code is fine.”

Prompt pattern:

I have a bug:
Expected behavior: [what should happen].
Actual behavior: [what is happening, including error messages verbatim].
Reproduction: [how to reproduce — deterministic or intermittent?].
What I have already ruled out: [your debugging history].
Relevant code: [paste the relevant section].
Environment: [language, runtime, version, any recent changes].

Generate:
1. Three hypotheses about root cause, ranked by probability
2. The test that would distinguish between hypotheses
3. Any patterns in this error class I should know about

Follow-up prompts:

My fix did not work. Here is what I tried and what happened: [describe]. What does this tell us?
This bug only appears in production. What environmental differences between prod and dev are most likely causing this?
This is an intermittent failure. Design a logging or tracing strategy that will capture enough context to diagnose it the next time it occurs.
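To make falsify-versus-confirm concrete, here is a hypothetical Python sketch of a bug where “the code looks fine,” plus the minimal experiment that distinguishes two hypotheses:

```python
def add_tag(tag, tags=[]):  # bug: the default list is created once and shared
    tags.append(tag)
    return tags

# Two competing hypotheses for "tags from one call show up in another":
#   H1: the caller is passing stale data -> predicts a no-argument call is clean
#   H2: state leaks inside the function  -> predicts a no-argument call is dirty
first = add_tag("alpha")
second = add_tag("beta")            # no list passed in, yet...
assert second == ["alpha", "beta"]  # H1 falsified: the leak is internal
assert first is second              # both calls returned the same list object
```

The point is the experiment, not this particular bug: each hypothesis made a different prediction, so one call settled the question.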

4. Technical Documentation

The developer’s documentation challenge: Documentation written by the person who built the system is usually wrong in a specific way: it documents what the system does, not what someone who has never seen it needs to know. The builder skips the conceptual model, the “why,” and the gotchas because they are obvious to the builder. They are not obvious to anyone else.

Prompt pattern:

I need to write [doc type: README / API reference / architecture overview / runbook / onboarding guide] for [system/component].
What this system does: [plain English].
Primary audience: [who will read this — new developer / external consumer / ops team].
The three things someone always asks about this system: [what every question is about].
The three things that always go wrong: [honest list].

Draft documentation that answers what someone needs to know — not everything I know.

Follow-up prompts:

Read this doc as a developer who has never seen this system. What question does it fail to answer?
What is the one sentence that would have saved me the most time when I first encountered this codebase?
Write the onboarding paragraph that tells a new developer what this system does, why it exists, and the three things that always go wrong.

5. System Design Exploration

The developer’s design challenge: System design interviews are a known format and everyone prepares for them. Actual system design — when you are building something real with real constraints and no clean answer — is harder. The problem is that real system design requires sitting with ambiguity long enough to actually understand the tradeoffs, and time pressure plus organizational momentum make that uncomfortable.

Prompt pattern:

I am designing [system/feature].
Requirements: [functional].
Non-functional requirements: [scale, latency, availability, cost, security].
Constraints: [what I cannot change — existing tech stack, team expertise, timeline].
My current sketch: [describe your thinking, even if rough].

Help me:
1. Identify what I have left ambiguous that needs a decision
2. Find the failure modes I have not accounted for
3. Push on the non-functional requirements — which will I actually hit with this design?
4. Propose alternatives to my current approach and why they might be better

Follow-up prompts:

What is the simplest version of this design that still meets the non-functional requirements — and what would I gain by starting there?
Identify the implicit assumptions in this design that become false at 10x the current scale.
What is the failure mode of this design that will generate the first production incident — and can I build detection for it now?
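The 10x follow-up above is easiest to answer with numbers. A back-of-envelope sketch in Python — every figure and the peak factor here is an illustrative assumption, not a benchmark:

```python
def peak_write_qps(daily_users, writes_per_user_per_day, peak_factor=3):
    """Average write rate over a day, scaled by an assumed peak-traffic factor."""
    return daily_users * writes_per_user_per_day / 86_400 * peak_factor

current = peak_write_qps(50_000, 20)   # roughly 35 writes/sec at peak
at_10x = peak_write_qps(500_000, 20)   # roughly 347 writes/sec at peak
# If one well-tuned database primary comfortably absorbs ~347 write QPS,
# the 10x check tells you sharding is not the first problem to solve.
```

Five lines of arithmetic like this often settles a design debate faster than an hour of whiteboarding.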

6. Testing Strategy and Test Design

The developer’s testing challenge: Tests are written to pass, not to find bugs. The instinct is to write tests that exercise the happy path of code you just wrote — which proves the code does what you think it does, not that it does what it actually should do. Good test strategy starts with asking “what are the ways this can fail” rather than “what should I write to get coverage.”

When to use: Designing tests for new features, increasing confidence in legacy code, deciding where to invest in test infrastructure, or evaluating whether an existing test suite is actually protective.

Prompt pattern:

I need to write tests for [system/component/function].
What it does: [description].
Current test coverage: [what exists — if anything].
What failure would be most costly to miss: [your honest assessment].
Constraints: [unit vs. integration vs. e2e, speed requirements, external dependencies].

Generate:
1. The test cases I should prioritize to build the most protection with the least test code
2. Edge cases and failure modes I have probably not thought of
3. The distinction between what should be unit-tested vs. integration-tested here
4. A test that would catch the most likely bug in this code

Follow-up prompts:

I have 80% code coverage and still got a production bug. Why might that be and what type of testing would have caught it?
I am testing a function with external dependencies. How do I structure tests so they are actually useful without becoming a mock-everything anti-test?
I need to add tests to legacy code that has no test infrastructure. What is my sequencing for maximum safety with minimum effort?
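What failure-mode-first test design looks like in practice, using a hypothetical parser (function and cases invented for illustration): the tests start from “what are the ways this can fail,” not from coverage.

```python
def parse_duration(text):
    """Parse '90s', '5m', or '2h' into seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    value, unit = text[:-1], text[-1]
    if unit not in units:
        raise ValueError(f"unknown unit: {unit!r}")
    n = int(value)
    if n < 0:
        raise ValueError("duration must be non-negative")
    return n * units[unit]

# Failure cases first, happy path second:
assert parse_duration("90s") == 90
assert parse_duration("2h") == 7200
for bad in ("", "h", "-5m", "5x", "5"):
    try:
        parse_duration(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except (ValueError, IndexError):
        # Note the IndexError for "": writing the failure cases first is
        # what surfaces that the function leaks the wrong exception type.
        pass
```

Six tests, and one of them already found a real defect (the inconsistent exception type for empty input) that a happy-path suite would never touch.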

7. Incident Response, Post-Mortems, and On-Call

The developer’s incident challenge: The instinct during an incident is speed — trying things fast, rolling back, restarting services — without the discipline of a hypothesis. Actions taken in panic during an incident can mask the original cause, create new problems, and make post-incident analysis nearly impossible. The developers who handle incidents well are the ones who slow down enough to form and test hypotheses even when the pressure to “do something” is overwhelming.

When to use: During an active incident (for diagnostic structure), writing post-mortems, or building better on-call practices.

Prompt pattern (active incident):

Active incident: [describe symptoms, scope, user impact, time elapsed].
What I have already ruled out: [list].
Current state of the system: [metrics, errors, what is and is not affected].
Recent changes: [deployments, config changes, any infra changes in the last 24–72 hours].

Generate:
1. Most likely root cause hypotheses ranked by probability
2. The fastest diagnostic test for each hypothesis
3. The mitigation action to consider if hypothesis 1 is confirmed
4. What information I should be gathering RIGHT NOW while I still can

Post-mortem prompt:

Post-mortem for incident on [date]:
- What happened (timeline of events)
- Root cause (confirmed)
- Contributing factors
- User impact (duration, scope, data)
- What worked in response
- What did not work

Help me write a blameless post-mortem that:
1. Identifies systemic causes, not individuals
2. Produces action items that are specific, assigned, and time-bounded
3. Separates what we know from what we are inferring
4. Is honest enough to be useful, not sanitized for optics

Follow-up prompts:

Our post-mortem generated eight action items. Help me rank them by impact on future incident prevention.
We have had three incidents in three months with similar patterns. What systemic issue might connect them?
Design the on-call handoff document for this service — what does the incoming engineer need to know that is not in the runbook?
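One way to act on item 4 of the active-incident prompt (“what should I be gathering RIGHT NOW”): keep a structured incident log from the first minute, so the post-mortem timeline does not depend on anyone’s memory under stress. A minimal Python sketch — the file name and field names are invented, not an Alex convention:

```python
import json
import time

def incident_note(event, **context):
    """Append a timestamped, structured entry to an append-only incident log."""
    entry = {"ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
             "event": event, **context}
    with open("incident.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

note = incident_note("hypothesis_tested",
                     hypothesis="connection pool exhaustion",
                     result="falsified",
                     evidence="pool at 40% utilization during error spike")
```

Each hypothesis, test, and mitigation gets one line as it happens; the post-mortem timeline then writes itself.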

What Great Looks Like

After consistent use, you should notice the difference is not speed.

The developers who will be most effective in an AI-augmented engineering environment are not the ones who generate code fastest. They are the ones with the clearest thinking about design tradeoffs, system reliability, and the long-term cost of decisions made in haste.


Your AI toolkit: These prompts work in ChatGPT, Claude, Copilot, Gemini — and in the Alex VS Code extension, which was designed around them. Start with whatever you have. The skill transfers across all of them.

Your First Week Back: Practice Plan

Day 1: Use the Code Review pattern on code you wrote recently (25 min)
Day 2: Write an ADR for the most recent architecture decision your team made (25 min)
Day 3: Use the Debugging Reasoning pattern on an open puzzle in your codebase (25 min)
Day 4: Run the Testing Strategy pattern on your least-tested critical path (25 min)
Day 5: Review the week’s prompts and save your three best with /saveinsight (25 min)

Month 2–3: Advanced Applications

Track Your Growth

Architecture Context Archive

Keep a queryable record of system design decisions:

/saveinsight title="ADR: [decision]" insight="Context: [system state]. Options: [A, B, C]. Chose: [chosen option] because [rationale]. Rejected: [other options] because [reasons]. Revisit if: [conditions that would change the decision]." tags="architecture,adr"

On-Call Pattern Library

Capture patterns from incidents to speed future diagnosis:

/saveinsight title="Incident pattern: [symptom]" insight="Symptom: [describe]. Most likely causes: [ranked list]. First diagnostic step: [specific check]. Common false leads: [what looks relevant but usually is not]. Resolution pattern: [what typically fixes it]." tags="on-call,debugging"

Continue your practice: Self-Study Guide — the 30/60/90-day habit guide.

Skills Alex brings to this discipline: code-review, testing-strategies, vscode-extension-patterns, research-first-development, root-cause-analysis.
Completed this playbook?

Show the world you've mastered AI for software development. Add your verified certificate of completion to LinkedIn.