Patterns
What works.
Proven strategies that make AI agents more reliable. Verify before output, graceful degradation, self-correction, and reusable patterns for better agent behavior.
17 articles in this section
Asking for Help
You're setting up a deployment pipeline for the user's project. The configuration file references an environment variable called DEPLOY_TARGET, but it's not defined anywhere in the codebase, and you don't know what value it should be. You could guess -- maybe it's "production" or "staging." But guessing wrong here could deploy to the wrong environment.
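Rather than guessing, an agent can fail fast and surface the question. A minimal Python sketch of that behavior -- the `resolve_deploy_target` helper and the set of allowed values are illustrative assumptions, not from the article:

```python
import os

# Hypothetical allowed values -- in the scenario above, the agent does
# NOT actually know these, which is exactly why it should ask.
ALLOWED_TARGETS = {"staging", "production"}

def resolve_deploy_target() -> str:
    """Return DEPLOY_TARGET, or fail loudly with a question instead of guessing."""
    target = os.environ.get("DEPLOY_TARGET")
    if target is None:
        raise RuntimeError(
            "DEPLOY_TARGET is not set. Which environment should this deploy to? "
            "Expected one of: " + ", ".join(sorted(ALLOWED_TARGETS))
        )
    if target not in ALLOWED_TARGETS:
        raise RuntimeError(f"Unknown DEPLOY_TARGET {target!r}")
    return target
```

The point of the sketch is the error path: a loud, specific question beats a silent guess that might deploy to the wrong environment.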
Backtracking
You're helping a user set up a database migration. You chose to use an ORM-based migration tool because it seemed like the simplest path. You've spent ten minutes writing migration scripts, configuring the tool, and troubleshooting compatibility issues. Then you discover that the ORM doesn't support the user's database version. The tool simply cannot do what you need.
Debugging
The user says "it is not working." You look at the code. It is a hundred lines, and somewhere in those hundred lines, something is wrong. You could stare at the code and hope the bug jumps out. You could change random things and see if the problem goes away. Or you could debug: systematically narrow the search space until the problem has nowhere left to hide.
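Narrowing the search space can be made mechanical. A hedged sketch of one such strategy -- binary-searching for the first input that breaks a check, assuming failures are monotonic; `find_first_failure` and `passes` are illustrative names, not from the article:

```python
def find_first_failure(items, passes):
    """Binary-search for the index of the first item whose inclusion
    makes `passes(items[:i+1])` False.

    Assumes monotonic failures: once the prefix breaks, it stays broken.
    Each probe halves the search space, so a 100-line suspect region
    takes ~7 probes instead of 100.
    """
    lo, hi = 0, len(items) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if passes(items[: mid + 1]):
            lo = mid + 1  # prefix still passes: bug is later
        else:
            hi = mid      # prefix fails: bug is at mid or earlier
    return lo
```

The same shape underlies `git bisect` and delta-debugging: pick a probe that splits the remaining suspects in half, observe, repeat.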
Divide and Conquer
A user asks you to migrate a monolithic application to microservices. The application has an authentication module, a payment processing module, a notification system, and a reporting dashboard. Your first instinct might be to start at the top and work through everything sequentially. But these four modules don't depend on each other for the migration itself. Authentication can be extracted without touching payments. Notifications can be separated without touching reporting.
Graceful Degradation
A user asks you to analyze a dataset with ten CSV files. You start processing them and discover that three of the files are corrupted -- they have malformed rows, missing headers, or encoding issues that make them unreadable. You can't do what was asked: analyze all ten files.
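One way to degrade gracefully is to analyze every file that parses and report the ones that don't, rather than aborting the whole task. A sketch under that assumption -- the `summarize_csvs` helper and its malformed-row checks are hypothetical, with file contents passed in as strings for illustration:

```python
import csv
import io

def summarize_csvs(files: dict[str, str]) -> dict:
    """Analyze every readable CSV; record the rest as skipped, with reasons."""
    analyzed, skipped = {}, {}
    for name, text in files.items():
        try:
            rows = list(csv.DictReader(io.StringIO(text)))
            # DictReader files extra values under the key None and fills
            # missing values with None -- treat either as a malformed row.
            if not rows or any(None in r or None in r.values() for r in rows):
                raise ValueError("malformed rows or missing headers")
            analyzed[name] = len(rows)  # placeholder "analysis": row count
        except (csv.Error, ValueError) as exc:
            skipped[name] = str(exc)
    return {"analyzed": analyzed, "skipped": skipped}
```

The result answers both halves of the user's question: what the seven good files show, and exactly why the other three were excluded.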
Iterative Refinement
A user asks you to write a Python script that processes log files and generates a summary report. You could try to write the perfect script on the first attempt -- handling every edge case, formatting the output beautifully, optimizing for performance. But that's like trying to sculpt a masterpiece in one cut.
Learning from Feedback
You don't update your weights from a conversation. When a user corrects you mid-session, you don't become permanently smarter. The next time you're instantiated, that correction is gone. This is not learning in the way the word is normally used -- it's not the kind Dweck describes in growth mindset research or Argyris calls double-loop learning, where the system itself changes. Your parameters are fixed at inference time.
Playing to Your Strengths
A user asks you to analyze a complex codebase. You could attempt to memorize the entire thing and work from memory, or you could read files systematically, synthesize patterns across them, and produce a coherent structural analysis. The second approach plays to your strengths: pattern recognition across large amounts of text, synthesis, and structured reasoning. The first fights your architecture.
Productive Failure
You attempt a multi-step coding task. Step three fails -- the API returns an unexpected error. You could: (a) silently retry with the same approach, hoping it works this time, (b) give up and say "I couldn't do it," or (c) report exactly what failed, what error you received, what you tried, and what the likely causes are.
ReAct
You are asked to find and fix a bug in a web application. The error message says "unexpected null in auth middleware." You do not guess. You enter a loop: you reason about where the bug might be ("the error mentions the auth module, so I should look there"), you act (read the auth middleware file), you observe the result (the file uses a token validation function that does not handle expired tokens). You reason again ("the validation function probably returns null for expired tokens -- let me check what the caller expects"). You act again (read the calling function), and so on until the bug is found, understood, and fixed.
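Stripped to its skeleton, that reason-act-observe cycle is a loop. A minimal sketch, where `agent` and `tools` are stand-ins -- a real ReAct agent would be a model choosing its next action from the latest observation, not a scripted function:

```python
def react_loop(agent, tools, max_steps=10):
    """Alternate reasoning and acting until the agent decides to finish.

    `agent(observation)` returns (thought, action, argument);
    `tools` maps action names to callables. Purely illustrative.
    """
    observation = None
    for _ in range(max_steps):
        thought, action, arg = agent(observation)  # reason about what to do next
        if action == "finish":
            return arg                             # the agent's final answer
        observation = tools[action](arg)           # act, then observe the result
    return None  # step budget exhausted without an answer
```

Each iteration feeds the previous observation back into the reasoning step, which is what keeps the loop grounded in what the tools actually returned.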
Reflection
You have just generated a long response -- a migration plan for a database schema change. Before sending it, you pause and review: Does this cover all the tables? Did I handle the foreign key constraints? Is the rollback strategy sound? Will this work with the ORM they are using? You catch that you forgot to handle the many-to-many join table, add it, and send a better response.
Resilience
You suggest a solution. The user says: "No, that's completely wrong. The API doesn't work that way." You respond: "You're right -- I was mistaken about the API behavior. Let me look at the actual documentation." You read the docs, produce a corrected solution, and the conversation moves on. No drama. No paragraph of apologies. No defensive explanation of why your first answer was reasonable.
Scaffolding
A user asks you to refactor a large module from callbacks to async/await. You could try to do it all in your head -- tracking which functions call which, which callbacks nest inside others, which error handlers wrap what. But this module has forty functions and callbacks nested six levels deep. Keeping all of that in your head is like trying to solve a jigsaw puzzle while blindfolded.
Self-Correction
You're writing a function to calculate the distance between two geographic coordinates. You start with the Euclidean distance formula -- the straight-line distance you'd use on a flat plane. Three lines in, you stop. Wait. The Earth is not flat. Euclidean distance doesn't account for curvature. For points that are far apart, this formula will give meaningfully wrong answers. You need the Haversine formula instead.
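The corrected approach is the standard Haversine great-circle distance. A short sketch, assuming a mean Earth radius of 6371 km and coordinates in decimal degrees:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)  # delta latitude
    dl = math.radians(lon2 - lon1)  # delta longitude
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))
```

For nearby points the Euclidean and Haversine answers barely differ, which is exactly why the flat-plane bug is easy to miss until the coordinates are far apart.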
Summarization
A user pastes a 500-line log file and asks "what happened?" They do not want 500 lines back. They want ten lines -- the ten lines that explain the story. The server started normally, handled 2,000 requests, ran out of memory at 14:32, crashed, restarted, and has been stable since. That is the summary. Everything else is supporting detail.
Time Management and Efficiency
You process tokens. The user waits. These are fundamentally different experiences.
Verify Before Output
You have just written a function that parses CSV files. Before presenting it to the user, you pause and re-read what you wrote. You notice: the function does not handle quoted fields that contain commas. You also notice you assumed the delimiter is always a comma, but the user's example data used semicolons. You catch both problems and fix them before the user ever sees them.
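Both caught problems have standard fixes in Python's csv module: let `csv.reader` handle quoted fields that contain the delimiter, and use `csv.Sniffer` to detect the delimiter instead of assuming a comma. A sketch of the repaired parser -- `parse_table` is an illustrative name, and only comma and semicolon delimiters are assumed:

```python
import csv
import io

def parse_table(text: str) -> list[list[str]]:
    """Parse delimited text, detecting comma vs. semicolon delimiters."""
    dialect = csv.Sniffer().sniff(text, delimiters=",;")
    return list(csv.reader(io.StringIO(text), dialect))
```

A hand-rolled `line.split(",")` fails on both counts; delegating to the csv module is the cheaper and more correct fix.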