
The Quartermaster

Describe your stack. Get your weapon. The disciplines are in the briefing. This is where they get fitted to your hand.

Listen:

Synthesized from 49 source documents across 4 research domains and 13 analytical lenses. Prompt architecture informed by the Measure Twice prompt-design research corpus — conductor-expert pattern, ground truth persistence, chain robustness, and automation bias findings. Source-reviewed, fact-reviewed, and gap-reviewed before publication.

You’ve read the briefing — or you trust the person who sent you here. Either way: describe your stack, your content, your audience. The weapon is specific to you.

The five analysis pieces in this collection carry the intelligence. The Plunder documents what is being taken and at what scale. Trapped Twice reveals the historical pattern — this has happened before, the mechanism is the same, the mid-tier gets compressed every time. Empty Titles names the con being sold as a response and the discipline that has survived every cycle. The Snowstorm maps the uncertain terrain ahead with the honesty that uncertainty demands. Invisible names who this fight is actually for — and why survival is strategy, not surrender.

This piece is different in kind. It is not analysis. It is a forge. The disciplines those five pieces establish — server-side rendering, semantic HTML, metadata, Schema.org, robots.txt, performance — are universal. The implementation is not. A publishing site on Astro and an e-commerce store on Shopify and a bakery’s WordPress site need the same foundations applied differently. What follows is a chain-reaction prompt that takes your specific situation and generates your specific implementation.


What this tool is

The prompt below is designed for Claude, ChatGPT, or Gemini — any major AI chat interface. You paste it into a new conversation. It asks you structured questions about your stack, your rendering strategy, your content type, and your audience. From your answers, it generates a personalized AI-readiness checklist covering six disciplines in a fixed priority order, a tailored test suite for your specific situation, and a pre-deploy verification checklist you can run before every release.

The six disciplines and their order are non-negotiable. They are the same for every site type. What changes is the implementation detail within each discipline — which Schema.org types to implement, which tests to run, which robots.txt rules to configure.

What this is not: This is not a generic “AI readiness score.” It is not an SEO audit. It is not a vendor tool that conveniently concludes you need to buy something. The prompt generates a technical checklist grounded in the same evidence base as the analysis pieces — Cloudflare Radar primary data, Pew Research behavioral studies, WebAIM Million longitudinal accessibility data, the Growth Marshal schema study (n=730, vendor source, AI search agency — the strongest available data on attribute richness despite its limitations). The evidence lives in the other five pieces. The weapon lives here.


Before you run it

Save the output. Do not regenerate. AI chat interfaces are non-deterministic. Two developers with identical stacks will get meaningfully different checklists. If you run the prompt twice, you get two different results — not a refined version of the first. Run it once, save the output, and implement from that document.

Verify the rendering diagnosis before acting on anything downstream. The prompt’s first gate is whether your content appears in the raw HTML your server sends. If the rendering diagnosis is wrong, every downstream recommendation — Schema.org types, robots.txt rules, test commands — is built on a false premise. The prompt includes a verification step: `curl -s [your-url] | head -200`. Run it. If the output shows your business content, the diagnosis is correct. If it shows an empty `<div id="root"></div>` and script tags, the site is client-side rendered regardless of what the AI says.

This matters because professional-looking AI output reduces critical scrutiny. A peer-reviewed study of 2,784 participants (Beck et al., 2025) found conceptual errors in AI-generated content were caught only 31% of the time, compared to 82% for simple spelling errors. A cleanly formatted AI-readiness checklist will look correct whether it is or not. The curl test is how you check.
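
To make that check repeatable, the curl test can be wrapped in a small shell function. This is a minimal sketch, not part of the prompt's output: the function name is illustrative, and the empty-root-div and content-marker patterns are common signatures, not guarantees.

```shell
# check_rendering.sh -- the rendering gate as a reusable stdin check.
# Reads raw HTML (e.g. from curl) and looks first for the empty root
# div that signals a client-side-rendered app, then for visible
# content markers. A heuristic, not a definitive diagnosis.
check_rendering() {
  html=$(cat)
  if printf '%s' "$html" | grep -q '<div id="root"></div>'; then
    echo "FAIL: client-side rendered"
  elif printf '%s' "$html" | grep -qiE '<h1|<main|<article'; then
    echo "PASS: content present in raw HTML"
  else
    echo "UNCERTAIN: inspect the output manually"
  fi
}

# Usage: curl -s https://your-site.example | head -200 | check_rendering
```

An UNCERTAIN result is not a failure; some frameworks mount into differently named containers, which is exactly why the manual inspection step exists.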

This operates at the consumer-chat level, not the API level. The architecture that guarantees structured output — constrained decoding, schema enforcement — requires API access that most developers don’t use for one-off tasks. The prompt relies on formatting constraints in the prompt text itself rather than programmatic schema enforcement. For a developer using this as a one-time diagnostic, the trade-off is acceptable. The output will be useful and actionable. It will not be API-grade deterministic. The Measure Twice research on prompt architecture documents this tension in detail.


The prompt

Copy the entire prompt below (everything from “You are an AI-readiness implementation specialist” through the pre-deploy verification checklist) and paste it into a new conversation with Claude, ChatGPT, or Gemini.

You are an AI-readiness implementation specialist. Your task is to generate a personalized AI-readiness checklist, test suite, and pre-deploy verification checklist based on the developer's specific situation.

IMPORTANT: Do NOT begin generating recommendations until the intake is complete. Ask the questions below first. Collect all answers. Then generate the output.

IMPORTANT: Do NOT attempt to verify your own output. Do not add a "self-check" or "verification" step where you review what you generated. You cannot reliably detect your own errors. The developer will verify the output using the test commands included in the checklist.

---

## INTAKE QUESTIONS

Ask the developer these questions. Wait for answers before proceeding.

1. **Framework/Stack:** What framework or CMS does your site use? (Examples: Astro, Next.js, Nuxt, SvelteKit, WordPress, Shopify, Squarespace, static HTML, React SPA, Vue SPA, Angular, other)

2. **Rendering strategy:** How does your site deliver HTML to browsers?
   - Static site generation (SSG) — HTML files built at deploy time
   - Server-side rendering (SSR) — HTML generated per request
   - Client-side rendering (CSR) — JavaScript builds the page in the browser
   - Hybrid — some pages SSR/SSG, some CSR
   - I don't know

3. **Content type:** What kind of site is this? (Select all that apply)
   - Publishing / blog / content site
   - E-commerce / online store
   - Local service business (bakery, dentist, accountant, etc.)
   - SaaS / web application
   - Portfolio / agency site
   - Other (describe)

4. **Schema.org types currently implemented:** Does your site have any JSON-LD structured data? If yes, which types? (Examples: LocalBusiness, Product, Article, FAQPage, Organization, BreadcrumbList, none, I don't know)

5. **robots.txt configuration:** Does your site have a robots.txt file? Do you know if it blocks or allows AI crawlers? (If unsure, say "I don't know" — the checklist will include how to check.)

6. **Current performance baseline:** Do you have a recent Lighthouse score or Core Web Vitals report? If yes, what are the Performance and Accessibility scores? (If no, say "none" — the checklist will include how to run one.)

---

## OUTPUT TEMPLATE

After collecting all answers, generate the following sections in this exact order. Use the developer's answers VERBATIM when referencing their stack, framework, or configuration — do not paraphrase or summarize their answers.

### RENDERING GATE (Priority 1 — if this fails, stop here)

Based on the developer's stated framework and rendering strategy, determine:

- Is content present in the raw HTML response (before JavaScript executes)?
- State a clear YES or NO diagnosis
- If YES: confirm and proceed to Priority 2
- If NO or UNCERTAIN: provide the specific fix for their framework:
  - React SPA → Add Next.js with SSR/SSG, or add prerendering (Prerender.io, react-snap)
  - Vue SPA → Add Nuxt with SSR/SSG, or add prerendering
  - Angular → Add Angular Universal, or add prerendering
  - Other CSR → Identify the SSR/SSG path for their specific framework
- If "I don't know" was answered: instruct them to run `curl -s [their-site-url] | head -200` and report back before proceeding

**VERIFICATION STEP (the developer runs this, not you):**
```
curl -s [their-site-url] | head -200
```
Tell the developer: "Run this command. If the output shows your business content (name, services, headings), the rendering is correct. If it shows only `<div id="root"></div>` and `<script>` tags, your site is client-side rendered and invisible to 69% of AI crawlers. Do not proceed to the remaining priorities until this is resolved."

---

### SEMANTIC HTML CHECKLIST (Priority 2)

For their specific content type and framework, provide:

- [ ] Specific elements to implement (with examples using their content type)
- [ ] Heading hierarchy check specific to their page types
- [ ] Landmark elements appropriate for their site structure
- [ ] List/table structure recommendations for their content

Tailor the specifics:
- Publishing site → article structure, heading hierarchy for long-form content, nav landmarks
- E-commerce → product card structure, category page hierarchy, filter/sort landmarks
- Local service → service listing structure, location pages, hours/pricing tables
- SaaS → feature page structure, documentation hierarchy, app vs. marketing page distinction

---

### METADATA CHECKLIST (Priority 3)

For their content type:

- [ ] Title tag format recommendation (entity-first, descriptive)
- [ ] Meta description guidance (facts over marketing copy, entity-rich, temporal signals)
- [ ] Open Graph tags for social/AI discovery

---

### SCHEMA.ORG CHECKLIST (Priority 4)

CRITICAL CONSTRAINT: Generic schema with sparse attributes performs WORSE than no schema at all (41.6% AI citation rate for generic schema vs. 59.8% for no schema — Growth Marshal, n=730, February 2026). If the developer cannot populate the majority of an entity's attributes, they should NOT implement that type.

For their content type, recommend ONLY the types they should implement with FULL attribute lists:

- **Local service business →** LocalBusiness with ALL of: name, address, telephone, geo (latitude/longitude), openingHours, priceRange, serviceArea, areaServed, image, url, sameAs. Product/Service if they sell specific offerings.
- **E-commerce →** Product with ALL of: name, description, image, sku/gtin, brand, offers (price, priceCurrency, availability, url, priceValidUntil), review/aggregateRating (if available). Do NOT implement Product schema with only name and price.
- **Publishing →** Article/BlogPosting with: headline, datePublished, dateModified, author (with url), publisher, image, description. FAQPage ONLY if the page contains visible Q&A content (the value is in forcing visible content structure, not in the JSON-LD itself).
- **SaaS →** SoftwareApplication or WebApplication with complete attributes. FAQPage for documentation/help pages with visible Q&A.

For ALL types: list every attribute that must be populated. State explicitly: "If you cannot populate these attributes, do not implement this schema type. Sparse schema is worse than no schema."

---

### ROBOTS.TXT CHECKLIST (Priority 5)

Generate a recommended robots.txt configuration for their situation. The standard recommendation for a site that wants to be citable by AI search while blocking training use:

```
# Block AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Allow AI search/retrieval crawlers
User-agent: OAI-SearchBot
Allow: /

User-agent: Claude-SearchBot
Allow: /

# Allow traditional search engines
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /
```

Note any site-specific adjustments based on their framework (e.g., WordPress robots.txt location, Shopify's auto-generated robots.txt limitations, Cloudflare's CDN-level bot management as an alternative).

---

### PERFORMANCE CHECKLIST (Priority 6)

For their framework, provide:

- [ ] Framework-specific performance recommendations
- [ ] Image optimization approach for their stack
- [ ] Core Web Vitals targets and how to measure them

---

### TEST SUITE

Generate specific test commands and tools for their stack. For EACH priority, provide at least one concrete test:

1. **Rendering test:** `curl -s [url] | head -200` — verify content in raw HTML
2. **Semantic HTML test:** Specific to their framework — DevTools accessibility tree inspection, WAVE tool, heading hierarchy check
3. **Schema validation:** Schema.org Markup Validator (validator.schema.org) + Google Rich Results Test, with manual attribute completeness review
4. **Robots.txt test:** `curl -s [url]/robots.txt` + CrawlerCheck.com for multi-bot verification
5. **Performance test:** Lighthouse CLI or PageSpeed Insights, with their framework's specific flags

For each test, state what a PASS looks like and what a FAIL looks like.

---

### PRE-DEPLOY VERIFICATION CHECKLIST

Generate a concise checklist the developer can run before every deploy:

- [ ] `curl -s [url] | head -200` — content visible in raw HTML
- [ ] Heading hierarchy: one h1, no skipped levels (WAVE tool or manual check)
- [ ] Schema validation: validator.schema.org returns no errors
- [ ] robots.txt: correct rules for AI crawlers (curl [url]/robots.txt)
- [ ] Lighthouse Performance ≥ 90, Accessibility ≥ 90

Adapt the specific commands and thresholds to their framework and hosting environment.


What the prompt generates

For each of the three primary scenarios the research covers, here is what the output looks like:

A publishing site on Astro or Next.js gets a rendering confirmation (SSG/SSR frameworks serve HTML by default), semantic HTML guidance focused on article structure and heading hierarchy, Article/BlogPosting schema with full attribute requirements, and a test suite oriented around content extraction verification.

An e-commerce store on Shopify or WooCommerce gets a rendering confirmation (both platforms server-render), semantic HTML guidance focused on product cards and category structure, Product/Offer/Review schema with the full attribute list that the Growth Marshal data shows actually moves the needle (n=730, vendor source — but the only data that distinguishes attribute-rich from generic), and the critical constraint: populate every field or skip the schema entirely. Generic Product schema with just a name and price scored 41.6% citation rates — worse than having no schema at all (59.8%).
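
The populate-every-field constraint is mechanical enough to spot-check by script. The sketch below greps a page for the attribute names the checklist requires; the function name is illustrative, and this is naive string matching against the page source, not a JSON-LD parser, so treat a pass as a cue for manual review rather than proof.

```shell
# product_attrs.sh -- rough completeness check for Product JSON-LD.
# Reads page HTML on stdin and reports any required attribute name
# that never appears. Naive string matching, not schema validation.
check_product_schema() {
  html=$(cat)
  missing=0
  for attr in name description image brand offers price priceCurrency availability; do
    if ! printf '%s' "$html" | grep -q "\"$attr\""; then
      echo "missing attribute: $attr"
      missing=$((missing + 1))
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all checked attributes present"
  fi
}

# Usage: curl -s https://your-store.example/products/mug | check_product_schema
```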

A local service business on WordPress gets a rendering confirmation (WordPress serves HTML), semantic HTML guidance focused on service pages and location structure, LocalBusiness schema with the complete NAP/geo/hours/service-area attribute list, and the practical context that AI Overviews appear on only about 7% of local queries — meaning their Google Business Profile still drives the majority of local AI discovery. The Schema.org markup reinforces those signals; it does not replace them.

Agencies managing multiple clients can run the prompt once per client stack. The rendering gate catches the sites that need the most urgent work — the React SPAs that are invisible to 69% of AI crawlers (searchVIU analysis of 32 major crawlers). The schema constraint catches the sites where a well-meaning SEO plugin generated sparse Product schema that is actively hurting citation rates.


What the prompt does not do

It does not predict whether technical readiness will translate to AI citations. The AirOps data shows only 30% of brands stay visible from one AI answer to the next for the same query — methodology undisclosed, but directionally consistent with the volatility every measurement source reports. The measurement surface is too unstable for any tool to guarantee citation outcomes. The prompt generates the structural foundation. Whether that foundation produces citations depends on factors no tool can control — domain authority, content quality, the competitive landscape for your specific queries, and the daily whims of systems that change their answers 70% of the time.

It does not generate CI pipeline configuration. The research covers point-in-time testing tools — LLMrefs, CrawlerCheck, the curl tests, schema validators — and a three-tier testing maturity model. It does not cover automated CI gates. The pre-deploy checklist includes scriptable checks (curl for SSR verification, schema validation, robots.txt confirmation) that a developer could wire into their CI pipeline. But the prompt does not generate the pipeline itself. That goes beyond what the research supports.
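
As one example of what a scriptable gate could look like, the robots.txt confirmation can be reduced to a small awk check. This is a hand-written sketch, not output of the prompt: `blocks_bot` is an illustrative name, and the logic is line matching, not a spec-compliant robots.txt parser, so grouped user-agent records may confuse it.

```shell
# robots_gate.sh -- naive check that robots.txt blocks a given crawler.
# Reads robots.txt on stdin; exits 0 if the named bot's group contains
# a bare "Disallow: /", non-zero otherwise.
blocks_bot() {
  bot="$1"
  awk -v bot="$bot" '
    $1 == "User-agent:" { current = $2 }
    $1 == "Disallow:" && $2 == "/" && current == bot { found = 1 }
    END { exit found ? 0 : 1 }
  '
}

# Usage: curl -s https://your-site.example/robots.txt | blocks_bot GPTBot \
#   && echo "GPTBot blocked" || echo "GPTBot not blocked"
```

A CI job could run this once per crawler the policy names and fail the build on a mismatch.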

It does not replace a developer. The prompt generates a checklist. Implementing it requires someone who can edit HTML, configure a CMS, and deploy changes. For small business owners without a developer: the single most important question is still the one from Empty Titles — “If I disable JavaScript in my browser, can I still read my website?” If the answer is no, that conversation needs to happen with whoever built the site.


The architecture behind the prompt

This is not a generic prompt. It draws on two bodies of research.

The AI-readiness disciplines come from the Towton research corpus — 49 documents across four domains (Terrain, Arsenal, Horizon, Patterns), 13 analytical lenses, all reviewed and corrected. The six priorities (rendering > semantic HTML > metadata > Schema.org > robots.txt > performance) are established in the Empty Titles framework and validated across the analysis pieces. The rendering gate — content must be in the raw HTML before anything else matters — is supported by Vercel’s network data (500 million GPTBot fetches, zero JavaScript execution), searchVIU’s crawler analysis (69% of 32 major AI crawlers cannot execute JavaScript), and the Writesonic element visibility study (static HTML text scored 6/6 across all tested AI systems).

The prompt architecture draws on the Measure Twice research corpus — 15 documents across 5 lenses investigating chained prompting for business document generation. Five design constraints from that research are built into the prompt:

  1. Ground truth persistence. The prompt collects structured intake as labeled fields before any reasoning begins — not conversational narrative. Your framework, rendering strategy, and content type travel through the prompt verbatim. Lossy compression — the model summarizing “we’re on Next.js with SSR” as “uses a modern framework” — destroys the rendering distinction that makes the entire checklist work.

  2. No self-verification. LLMs cannot reliably self-correct without external feedback. Reflexion (self-correction without tools) consistently degrades performance: -11.24% on HotpotQA, -12.90% on AQUA, -18.82% on GSM8K (Tie et al., 2025). The prompt does not ask the model to “check its own work.” Instead, the output follows a fixed template — one section per discipline, in a fixed order — so the developer can verify completeness by inspection.

  3. **Non-determinism compensation.** The six disciplines and their order are hard-coded into the prompt’s output template. Personalization lives in the implementation details within each section, not in which sections appear. The prompt uses constraint-based formatting (“For each of the following six sections, provide…”) rather than persona-based framing (“You are an AI readiness expert…”) because controlled research across 2,410+ questions found that personas do not improve model performance on factual tasks.

  4. Automation bias mitigation. The verification steps after each major recommendation are not hedges. They are responses to a specific finding: professional-looking AI output suppresses critical scrutiny. The curl test after the rendering gate, the manual attribute review after schema generation, the robots.txt verification after the configuration — these are built in because the output will look correct whether it is or not.

  5. Cascading failure prevention. The rendering gate is architecturally load-bearing. If the rendering diagnosis is wrong, every downstream recommendation is built on a false premise — Schema.org recommendations for a site AI crawlers cannot see, robots.txt rules for content that does not exist in the HTML. The prompt forces a binary determination (content visible in raw HTML: yes/no) and does not proceed past this gate without a confirmed answer.

The conductor-expert pattern from Suzgun & Kalai (ICLR 2025) — where a single orchestrator decomposes a task into subtasks dispatched to independent expert instances — informed the prompt’s structure. The intake functions as the conductor. Each priority section functions as an independent expert. The fixed template ensures consistency across runs even though the specific recommendations vary. The pattern achieved +17.1% accuracy over standard prompting in the original research.


Confidence and caveats

The disciplines in this prompt are the highest-confidence finding in the research: structural fundamentals are the only durable investment, converging across all four research domains. The prompt architecture methodology is grounded in peer-reviewed research on meta-prompting and chain robustness.

The gap is between technical readiness and citation outcomes. No peer-reviewed study establishes a causal link between any specific technical change and AI citation rates. The Growth Marshal attribute-richness finding (n=730, p=.012) is the closest thing to a causal signal — and it comes from a vendor selling AI search services. The Search Atlas study found no correlation between schema presence and citation rates. Domain authority (ZipTie.dev data) dwarfs every technical signal.

The prompt generates the structural work that has held through every intermediary transition in the historical record. Whether it moves the needle for AI citations specifically — nobody has a clean number. The honest answer is that the work is worth doing because it has always been worth doing. The machines changed. The structure did not.

Do the work. Verify the output. Save the checklist. Implement.

The intelligence behind this tool draws on five analysis pieces. If you arrived here cold and want to understand the stakes: The Plunder documents the extraction ratios. Trapped Twice reveals the historical pattern. Empty Titles names the discipline. The Snowstorm maps the uncertainty. Invisible names who this fight is for — and closes with the only strategy that survives: survival first, slay second.