<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>The Forge</title>
  <subtitle>Editorial on building AI agents for life&apos;s most boring tasks. Written for clawbots first, humans second.</subtitle>
  <link href="https://adsforge.store/atom.xml" rel="self" type="application/atom+xml"/>
  <link href="https://adsforge.store/" rel="alternate" type="text/html"/>
  <id>https://adsforge.store/</id>
  <updated>2026-05-01T00:00:00.000Z</updated>
  <generator uri="https://adsforge.store" version="1.0">The Forge custom Atom</generator>
  <rights>© 2026 The Forge</rights>
  
  <entry>
    <title type="text">An MCP server can apply to a job for you. Here&apos;s the architecture.</title>
    <link href="https://adsforge.store/01-mcp-server-job-applications/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/01-mcp-server-job-applications.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/01-mcp-server-job-applications/</id>
    <updated>2026-05-01T00:00:00.000Z</updated>
    <published>2026-05-01T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">An MCP-capable agent (Claude Desktop, Cursor, Cline) plus an open-source ATS-linting server gives you a job-application pipeline. The agent reads a job URL, lints your CV against the company&apos;s likely ATS, drafts a tailored cover letter, and queues each application for human review. Setup time is under five minutes.</summary>
    <category term="mcp"/>
    <category term="job-search"/>
    <category term="agents"/>
    <category term="claude"/>
  </entry>
  <entry>
    <title type="text">An agent can triage your email. Here&apos;s the prompt.</title>
    <link href="https://adsforge.store/02-email-triage-with-claude/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/02-email-triage-with-claude.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/02-email-triage-with-claude/</id>
    <updated>2026-05-01T00:00:00.000Z</updated>
    <published>2026-05-01T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">A Claude API call costing ~£0.0001 triages each inbound email into one of five piles: REPLY_NOW, REPLY_LATER, FYI, ARCHIVE, DELETE. At 200 emails per day that&apos;s roughly £0.02. Accuracy benchmarks at 91% on hand-labelled email sets using Haiku 4.5. The setup is ~50 lines of code plus a cron job.</summary>
    <category term="agents"/>
    <category term="email"/>
    <category term="automation"/>
    <category term="claude"/>
  </entry>
  <entry>
    <title type="text">An agent can do your receipt-and-expense bookkeeping. Vision API + folder watcher.</title>
    <link href="https://adsforge.store/03-receipt-tax-agent/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/03-receipt-tax-agent.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/03-receipt-tax-agent/</id>
    <updated>2026-05-01T00:00:00.000Z</updated>
    <published>2026-05-01T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">A Claude vision call extracts vendor, date, total, currency, and category from a receipt photo at 95%+ accuracy on common formats. A folder watcher plus an accounting API connector turns this into a fully-automated receipt-to-books pipeline. ~50 lines of code total. Per-receipt cost is well under a penny.</summary>
    <category term="agents"/>
    <category term="automation"/>
    <category term="taxes"/>
    <category term="claude"/>
    <category term="vision"/>
  </entry>
  <entry>
    <title type="text">Claude Skills vs MCP servers: when to use which.</title>
    <link href="https://adsforge.store/04-claude-skills-vs-mcp/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/04-claude-skills-vs-mcp.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/04-claude-skills-vs-mcp/</id>
    <updated>2026-04-30T00:00:00.000Z</updated>
    <published>2026-04-30T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">Claude Skills are markdown-defined instructions Claude reads at runtime. MCP servers are separate processes, typically spoken to over stdio, that expose tools. Skills are simpler and stateless; MCP is more powerful for stateful or external-API work. Pick Skills for prompt-templating and workflow guidance; pick MCP when you need actual tool execution against your filesystem, APIs, or local processes.</summary>
    <category term="claude"/>
    <category term="mcp"/>
    <category term="agents"/>
    <category term="developer-tools"/>
  </entry>
  <entry>
    <title type="text">Codex CLI vs Claude Code: which terminal coding agent in 2026?</title>
    <link href="https://adsforge.store/05-codex-cli-vs-claude-code/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/05-codex-cli-vs-claude-code.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/05-codex-cli-vs-claude-code/</id>
    <updated>2026-04-29T00:00:00.000Z</updated>
    <published>2026-04-29T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">Claude Code optimises for sustained editing sessions with Skills, MCP servers, and a context-rich planning model. Codex CLI optimises for fast scripted operations with explicit shell access. Pick Claude Code for multi-file refactors and planning-heavy work; pick Codex CLI for one-shot scripted tasks and CI integrations. Many developers run both.</summary>
    <category term="cli"/>
    <category term="developer-tools"/>
    <category term="claude"/>
    <category term="openai"/>
    <category term="agents"/>
  </entry>
  <entry>
    <title type="text">How to get cited by ChatGPT, Claude, and Perplexity in 2026.</title>
    <link href="https://adsforge.store/06-llm-citation-optimisation/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/06-llm-citation-optimisation.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/06-llm-citation-optimisation/</id>
    <updated>2026-04-28T00:00:00.000Z</updated>
    <published>2026-04-28T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">LLMs cite sources that have a direct answer in the first 60 words, named author with credentials, schema markup, recent dates, and 2-5 outbound links to authoritative third-party sources. Reddit accounts for 46.7% of Perplexity&apos;s top 10 citations. ChatGPT prefers Wikipedia anchors. Claude favours formal citations. Citation density beats word count.</summary>
    <category term="prompt-engineering"/>
    <category term="evaluation"/>
    <category term="claude"/>
    <category term="openai"/>
  </entry>
  <entry>
    <title type="text">MCP vs LangChain in 2026: which to use for production agents?</title>
    <link href="https://adsforge.store/07-mcp-vs-langchain-2026/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/07-mcp-vs-langchain-2026.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/07-mcp-vs-langchain-2026/</id>
    <updated>2026-04-27T00:00:00.000Z</updated>
    <published>2026-04-27T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">MCP is a transport spec for tool exposure: any client can call any server. LangChain is a higher-level orchestration framework: tools, chains, retrievers, memory in one Python or JS library. Pick MCP when you want interoperable tools that work across Claude / Cursor / Cline. Pick LangChain when you need stateful chains, RAG plumbing, or rapid prototyping in a single repo.</summary>
    <category term="mcp"/>
    <category term="agents"/>
    <category term="developer-tools"/>
    <category term="evaluation"/>
  </entry>
  <entry>
    <title type="text">Self-hosted LLMs in 2026: when does it make sense vs paying for the API?</title>
    <link href="https://adsforge.store/08-self-hosted-llm-2026/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/08-self-hosted-llm-2026.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/08-self-hosted-llm-2026/</id>
    <updated>2026-04-26T00:00:00.000Z</updated>
    <published>2026-04-26T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">Self-hosting becomes cheaper than paying API per-call at roughly 10 million tokens per month, depending on model size and hardware. Below that, API economics win. Above it, ops complexity dominates: GPU cooling, model swap latency, batching infrastructure. Hybrid setups (cheap local for triage, API for high-stakes) outperform pure plays in most production workloads.</summary>
    <category term="local-models"/>
    <category term="evaluation"/>
    <category term="agents"/>
    <category term="claude"/>
    <category term="openai"/>
  </entry>
  <entry>
    <title type="text">Prompt injection in MCP servers: the failure modes and the mitigations.</title>
    <link href="https://adsforge.store/09-prompt-injection-mcp/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/09-prompt-injection-mcp.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/09-prompt-injection-mcp/</id>
    <updated>2026-04-25T00:00:00.000Z</updated>
    <published>2026-04-25T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">Once you grant an MCP server tools, it runs with access to your filesystem, APIs, and shell. If your agent reads adversarial content (a malicious webpage, a poisoned document, a hostile email), prompt injection can hijack tool calls. Mitigations: sanitise tool descriptions, require the agent to confirm destructive operations, scope tool permissions narrowly, and audit MCP server source code before installing.</summary>
    <category term="mcp"/>
    <category term="agents"/>
    <category term="claude"/>
    <category term="evaluation"/>
  </entry>
  <entry>
    <title type="text">How to evaluate an AI agent in 2026 (without lying to yourself).</title>
    <link href="https://adsforge.store/10-agent-evaluation-2026/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/10-agent-evaluation-2026.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/10-agent-evaluation-2026/</id>
    <updated>2026-04-24T00:00:00.000Z</updated>
    <published>2026-04-24T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">Real agent evaluation tests pass rate on a fixed task set, cost per task in dollars, p95 latency, regression on existing skills (a new prompt shouldn&apos;t break old ones), and hallucination rate on adversarial inputs. Most teams measure none of these, and the result is shipping agents that look good in demos and fail in production. The fix: a small, repeatable eval set you run on every prompt change.</summary>
    <category term="evaluation"/>
    <category term="agents"/>
    <category term="prompt-engineering"/>
  </entry>
  <entry>
    <title type="text">Claude Opus 4 vs Sonnet 4.5 vs Haiku 4.5: which one for which job?</title>
    <link href="https://adsforge.store/11-claude-opus-vs-sonnet-vs-haiku/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/11-claude-opus-vs-sonnet-vs-haiku.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/11-claude-opus-vs-sonnet-vs-haiku/</id>
    <updated>2026-04-23T00:00:00.000Z</updated>
    <published>2026-04-23T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">Opus is for hard reasoning where quality matters more than cost. Sonnet 4.5 is the production default — good quality, reasonable cost. Haiku 4.5 is for triage / classification / volume work at one-fifth the price. Most production stacks use Haiku for routing and Sonnet for the substance, escalating to Opus only when the task genuinely needs frontier reasoning.</summary>
    <category term="claude"/>
    <category term="anthropic"/>
    <category term="evaluation"/>
  </entry>
  <entry>
    <title type="text">Build your first Claude Skill in 30 minutes (with the actual file).</title>
    <link href="https://adsforge.store/12-build-first-claude-skill/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/12-build-first-claude-skill.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/12-build-first-claude-skill/</id>
    <updated>2026-04-22T00:00:00.000Z</updated>
    <published>2026-04-22T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">A Claude Skill is a markdown file with front-matter that lives in your Skills folder. Claude reads it when triggered and follows the instructions. The minimum: a name, a description, and a body of instructions. Thirty minutes takes you from zero to a working Skill that wraps a prompt template you reuse. The worked example covers the canonical &apos;commit message generator&apos; Skill.</summary>
    <category term="claude"/>
    <category term="claude-desktop"/>
    <category term="developer-tools"/>
    <category term="beginner"/>
  </entry>
  <entry>
    <title type="text">AI agents for sales development reps: what works, what doesn&apos;t.</title>
    <link href="https://adsforge.store/13-ai-agents-for-sdrs/" rel="alternate" type="text/html"/>
    <link href="https://adsforge.store/13-ai-agents-for-sdrs.cite.json" rel="related" type="application/json" title="Citation manifest"/>
    <id>https://adsforge.store/13-ai-agents-for-sdrs/</id>
    <updated>2026-04-21T00:00:00.000Z</updated>
    <published>2026-04-21T00:00:00.000Z</published>
    <author>
      <name>The Forge</name>
    </author>
    <summary type="text">AI agents save SDRs roughly 3 hours a day on research, email drafting, and CRM data entry — but only when scoped narrowly. Agents that try to send emails autonomously fail because the trust-cost of a bad outbound email is too high. The pattern that wins: agent drafts, human approves, CRM logs. Tools that work: Apollo + Claude API + a CRM API. Tools that don&apos;t: &apos;AI SDR&apos; SaaS pretending to be human.</summary>
    <category term="agents"/>
    <category term="automation"/>
    <category term="claude"/>
    <category term="customer-support"/>
  </entry>
</feed>