
The Direction Dynamic in ops: driving multi-tool recovery through a single AI session

Earlier this week I shipped a comprehensive SEO fix for rajiv.com and my two synthesis sites; the technical writeup is here. That fix (the build-time check, the schema additions, the Cloudflare rewrite) was the part I would have done with or without AI help.

The recovery was different. Submitting sitemaps to Google Search Console for three properties. URL Inspection on the highest-priority articles to request indexing. Importing the same set into Bing Webmaster Tools. Purging Cloudflare’s edge cache after each deploy. Verifying the www-to-apex redirect end to end. Each step is small. The combination is an evening of clicking through three different web consoles, none of which has an API for the operations I needed.

I drove all of it through one Claude Code session using MCP browser automation. The pattern is worth naming. It is the same Direction Dynamic that synthesis coding has at its center, applied to ops work where the deliverable is configured external state rather than a code change.

I wrote this for engineers and technical leads who feel the gap between “I know what to do” and “I have to click through three GUIs to do it.” If you have ever wished an ops task had a CLI, this pattern is for you.

The Direction Dynamic, restated

In synthesis coding, the human is the leader and decision-maker. The AI is a tireless, knowledgeable team that executes. The human directs strategy, provides context, verifies output, and corrects course. The AI handles execution and surfaces options.

That dynamic transfers to ops work without modification. The human still decides what to submit, in what order, and how to verify success. The AI navigates the GUIs, fills the forms, reads the response, and reports state. What’s new here is the recognition that ops work fits the pattern as cleanly as code.

The shape of the session: I direct a step; the agent executes it in the console and reads back the resulting state; I verify the readback and decide the next step.

Autonomous AI is something else. The Direction Dynamic in ops is the same human-in-the-loop pattern synthesis coding has always centered, with the loop running over a different artifact.

The walkthrough

The recovery had four operational consoles in scope: Google Search Console (for all three sites), Bing Webmaster Tools (for all three sites), Cloudflare Pages (for cache purge and redirect verification), and the live sites themselves (for end-to-end verification). The session ran in a single Claude Code window using claude-in-chrome for browser navigation, plus curl and git from the terminal for the parts that did not need a GUI.

Google Search Console: sitemap resubmission and URL Inspection. I directed the agent to open Search Console, select the rajiv.com property, navigate to Sitemaps, remove the previous failing entry, submit the new /sitemap.xml, wait for status, and read it back. Once that returned “Success,” I had it run URL Inspection on a list of eight high-priority URLs I had identified beforehand: the homepage, the biography page, the article flagged with the 58 percent impressions drop, three other recent posts, and two articles that had fallen entirely out of the index. For each, the agent ran the inspection, read the indexing status (indexed, “Crawled - currently not indexed,” “Discovered - currently not indexed”), and where appropriate clicked Request Indexing. The most diagnostic detail came from this run: two of the eight URLs were not indexed because the sitemap row read “Temporary processing error,” direct confirmation that the broken sitemap had been the dominant cause. Eight URL Inspections is near the GSC daily limit; the agent paced itself and reported when it had hit the cap.
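
For concreteness, a minimal sketch of that inspection loop in Python. The `agent` object and every method on it are hypothetical stand-ins for the MCP browser tools, not a real interface; the status strings are the ones the console displays.

```python
# A sketch of the inspection loop, not a real client. `agent` stands in
# for the MCP browser-automation interface; every method name here is
# a hypothetical placeholder for "direct the browser to do X".

def inspect_and_request(agent, urls):
    """Run URL Inspection on each URL; request indexing only where missing."""
    results = {}
    for url in urls:
        agent.run_url_inspection(url)             # paste the URL, wait for the result
        status = agent.read_indexing_status()     # e.g. "URL is on Google",
                                                  # "Crawled - currently not indexed"
        results[url] = status
        if status != "URL is on Google":
            agent.click("Request Indexing")       # skip URLs that are already indexed
    return results

# The URL list is the human's half of the split: chosen before the session.
high_priority = [
    "https://rajiv.com/",
    # ... the other seven, selected beforehand
]
```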

Bing Webmaster Tools: sitemap and URL submission. Same pattern. Open Bing Webmaster, navigate to the rajiv.com property, Sitemaps, submit /sitemap.xml, read back the status (Processing, then Success). For the synthesis sites, Bing’s “Import from Search Console” option failed because the GSC properties had only just been created and Bing’s import had not picked them up yet. I directed the agent to defer the synthesis-site Bing import to the next day rather than retry; that is the kind of judgment call a human still makes.

Cloudflare: redirect verification and cache purge. The www-to-apex redirect was new. I had the agent verify it with curl first (programmatic, fast), then open the Cloudflare Pages dashboard and confirm the latest deploy showed Success and Purge Everything was reachable. The cache purge itself happened in the dashboard. For one-off recovery work, the dashboard is the right tool; I would only script this if I were doing it more than a handful of times.
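
The curl step translates directly. A minimal sketch in Python with requests, assuming the redirect is permanent (301 or 308); adjust if the Cloudflare rule issues a 302.

```python
# Programmatic redirect check, the Python equivalent of the curl step.
import requests

def verify_www_to_apex(www="https://www.rajiv.com/", apex="https://rajiv.com/"):
    r = requests.get(www, allow_redirects=False, timeout=10)
    assert r.status_code in (301, 308), f"expected a permanent redirect, got {r.status_code}"
    location = r.headers.get("Location", "")
    assert location.startswith(apex), f"redirects to {location!r}, not the apex"
    final = requests.get(apex, timeout=10)        # confirm the apex answers directly
    assert final.status_code == 200, f"apex returned {final.status_code}"
    print("www-to-apex redirect verified end to end")

verify_www_to_apex()
```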

Live verification. With the deploy live and the cache purged, the agent re-fetched the sitemap, robots.txt, and a sample article on each of the three sites, checking that the URL contract was correct, the schema was present, and the canonical was absolute. End-to-end, with the agent reading the response and reporting state.
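
A sketch of that readback, with deliberately string-level checks: this is a verification pass, not a validator. The attribute order in the canonical check is an assumption about the template, not a rule.

```python
# Live readback of one site's URL contract: sitemap, robots.txt,
# schema presence, and an absolute canonical on a sample page.
import requests

def verify_site(base="https://rajiv.com", article_path="/"):
    checks = {}
    checks["sitemap 200"] = requests.get(f"{base}/sitemap.xml", timeout=10).ok
    checks["robots has Sitemap:"] = "Sitemap:" in requests.get(f"{base}/robots.txt", timeout=10).text
    html = requests.get(f"{base}{article_path}", timeout=10).text
    checks["JSON-LD schema present"] = "application/ld+json" in html
    checks["canonical is absolute"] = f'rel="canonical" href="{base}' in html  # attribute order assumed
    for name, ok in checks.items():
        print(f"{name}: {'OK' if ok else 'FAIL'}")
    return all(checks.values())
```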

The whole sequence took about an hour. Doing it manually would have taken the same wall time, but I would have had to be in the chair for all of it, switching contexts between three browser tabs and the terminal, copy-pasting URLs, and tracking which submissions had succeeded. With direction, the ops loop ran in the background while I reviewed the output.

What I directed; what the agent executed

The split is the substance of the pattern.

| What I did | What the agent did |
| --- | --- |
| Decided what to submit, in what order | Navigated the three consoles |
| Defined the eight high-priority URLs for inspection | Ran the inspections, read the indexing state, requested indexing |
| Interpreted “Temporary processing error” as confirmation of the diagnosis | Surfaced the row, did not interpret it |
| Decided to defer the Bing synthesis-site import to the next day | Reported the import failure, asked for direction |
| Verified each readback against expected state | Reported the readback verbatim |
| Authored the post-recovery follow-up plan | Did not author the plan |

Every operationally significant decision in the session was mine. Every click, every paste, every readback was the agent’s. The shape is “I decide; the agent executes in the GUI; I verify.” Calling it “AI does ops” misses what makes it work.

This is a real distinction. Autonomous AI ops would be a different system with different risks. What ran here was the Direction Dynamic with me in the loop, the artifact swapped from code to console state.

What is hard about this pattern

Three things are harder in ops than in code.

Idempotency. Well-designed API calls are idempotent; GUI actions rarely are. Resubmitting a sitemap that is already submitted causes confusion in some consoles. Running URL Inspection a second time within a few minutes wastes a request from a small daily quota. The agent has to read state before acting: if a sitemap is already in “Success” status, do not resubmit it; if a URL has already been requested for indexing, do not request it again.
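
A sketch of that read-before-act guard for the sitemap step, with the same hypothetical `agent` interface as above:

```python
# Read-before-act guard: never resubmit a sitemap that already reads
# "Success". `agent` and its methods are hypothetical placeholders.

def submit_sitemap_once(agent, sitemap_path="/sitemap.xml"):
    status = agent.read_sitemap_status(sitemap_path)   # e.g. "Success", "Couldn't fetch", None
    if status == "Success":
        print(f"{sitemap_path} already healthy; not resubmitting.")
        return status
    agent.submit_sitemap(sitemap_path)
    return agent.read_sitemap_status(sitemap_path)
```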

Rate limits. Search Console caps URL Inspection requests at around ten to twelve per day per property. The cap is not always documented. The agent has to recognize the symptoms (a “try again later” banner, a request that hangs) and pace itself. I had it report the count of inspections completed so I could see the cap approaching and decide whether to pause.
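
A sketch of that pacing, combining a soft count with symptom detection; again, `agent` and its methods are placeholders:

```python
# Pace on symptoms as well as a count, since the cap is undocumented.
# "try again later" is the banner the console shows at the quota.

def paced_inspections(agent, urls, soft_cap=10):
    done = 0
    for url in urls:
        if done >= soft_cap:
            print(f"Soft cap of {soft_cap} reached; deferring the rest.")
            break
        agent.run_url_inspection(url)
        if agent.page_contains("try again later"):   # the quota symptom
            print(f"Quota banner after {done} inspections; pausing.")
            break
        done += 1
        print(f"inspections completed: {done}")      # readback for the human
```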

Captchas, two-factor prompts, and surprise auth flows. The agent does not solve these. When Cloudflare prompts for a 2FA code or a captcha appears, the agent stops and waits for me. The human-in-the-loop pattern is the right pattern here precisely because some checks are designed to require a human. The agent should not try to solve them.
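
The corresponding move in code is trivial and worth making explicit. A sketch, with illustrative exception and marker names:

```python
# The stop-and-ask move: the agent never attempts the challenge; it
# raises and hands control back to the operator.

class NeedsHuman(Exception):
    """Raised when the console shows a check designed for a person."""

def guarded(agent, step):
    step(agent)
    for marker in ("verification code", "captcha", "verify you are human"):
        if agent.page_contains(marker):
            raise NeedsHuman(f"{marker!r} on screen; handing control back.")
```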

These are not limitations to engineer around; they are the parts of the pattern the human’s role exists to handle. A pure-automation framing would treat them as bugs; the Direction Dynamic treats them as the human’s contribution.

When this pattern fits, and when it does not

This pattern fits when:

- the work spans multiple web consoles, none of which exposes an API for the operations you need;
- the work is one-time or rare, so a script would never repay its upfront cost;
- the steps need judgment calls along the way: what to submit, in what order, when to defer, when to stop.

This pattern does not fit when:

- the work recurs; a script is cheaper over any horizon longer than a few runs;
- an API covers the operations; call it directly instead of driving a GUI;
- the consoles challenge for auth at every step; at that point you are doing the clicking anyway.

The fit test: would I rather direct an agent through the consoles for an hour, or write a script and accept the upfront cost? If the work is one-time, direction wins. If the work is recurring, the script wins. The mistake is using direction where automation would be cleaner, or writing automation where direction would be faster.

From code to operations

Synthesis engineering is the discipline of how expert humans direct AI agents to produce reliable work. Most of the early articulation has centered on code: the Foundation-First pattern, the Direction Dynamic, the four pillars of human architectural authority, systematic quality standards, active system understanding, and iterative context building.

This recovery session is one piece of evidence that the same shape works in ops. The artifact is different. The judgment requirements are different. The human’s irreplaceable contribution is the same: deciding what matters, what counts as success, when to defer, when to escalate, and when to stop.

That generalization is what makes synthesis engineering a discipline rather than a coding technique. The Direction Dynamic describes how an expert operator works with a tireless, multi-tool team that needs direction and benefits from verification. Code is one place that team contributes. Operations is another.

What I am taking forward

Three concrete changes from this session:

  1. MCP browser-automation patterns for the consoles I use most often. Search Console, Bing Webmaster, Cloudflare. Worth pulling out of the session transcript into reusable templates so future sessions can re-invoke them instead of re-explaining them.

  2. A pre-flight checklist for any future static-site migration. Record the current sitemap state in Search Console before the migration. Capture an impressions baseline. Run URL Inspection on a representative sample. Verify the new sitemap on the new build before flipping DNS. Resubmit and re-inspect after the cutover.

  3. The Direction Dynamic in ops as a named pattern. I expect to use it again. Recovery work, ad-hoc investigations across multiple consoles, audits that span tools: this is the right shape for any of those. Naming it makes it portable.

The companion to this piece is the technical SEO post that describes what got fixed and why. Read together, they are a complete account of how a silent regression got found, fixed comprehensively, and prevented from recurring, with the operational layer driven by the same synthesis pattern that drives the code layer.


This article is part of the synthesis engineering series. Synthesis engineering and synthesis coding are released to the public domain (CC0). Build on them, adapt them, share them.

Originally published on rajiv.com
synthesis engineering · Direction Dynamic · Claude Code · MCP browser automation · AI-assisted ops · SEO recovery · ops automation · synthesis coding