{
    "version": "https://jsonfeed.org/version/1",
    "title": "DDD.WTF",
    "description": "",
    "home_page_url": "https://ddd.wtf",
    "feed_url": "https://ddd.wtf/feed.json",
    "user_comment": "",
    "author": {
        "name": "Oleksii Ivanov"
    },
    "items": [
        {
            "id": "https://ddd.wtf/post/how-to-build-great-skills-for-claude-code/",
            "url": "https://ddd.wtf/post/how-to-build-great-skills-for-claude-code/",
            "title": "How to Build Great Skills for Claude Code",
            "summary": "Most Claude Code skills fail silently — they either never trigger, produce generic output, or break the moment something unexpected happens. Here’s how to build&hellip;",
            "content_html": "<hr>\n<p>Most Claude Code skills fail silently — they either never trigger, produce generic output, or break the moment something unexpected happens. Here’s how to build ones that actually work.</p><hr>\n<h2 id=\"what-is-a-skill\">What is a Skill?</h2>\n<p>A skill is a set of instructions that teaches Claude Code how to handle a specific type of task. Think of it as a recipe — it tells Claude what steps to follow, what to look for in the codebase, and how to produce consistent, high-quality output.</p><p>A skill lives as a <code>SKILL.md</code> file in your project (usually under <code>.claude/skills/</code>), and Claude reads it whenever it detects a relevant request. The file contains markdown instructions, and optionally references additional files for detailed guidance.</p><p>This article shares practical patterns for building skills that trigger reliably, research code thoroughly, handle failures gracefully, and produce output developers actually want. The examples come from real skills we built and open-sourced at <a href=\"https://github.com/axitdev/skills\"  class=\"extlink extlink-icon-1\"  >github.com/axitdev/skills</a> — feel free to reference them, but the patterns here apply to any skill you’re building.</p><hr>\n<h2 id=\"the-architecture-that-works\">The Architecture That Works</h2>\n<p>After iterating on several skills, we found a consistent three-layer structure works best:</p><pre><code>skill-name/\n├── SKILL.md                    (under 500 lines — main instructions)\n├── references/\n│   └── detailed-reference.md   (loaded on demand — syntax, patterns, etc.)\n└── .skill-config.yaml          (optional — user-customizable settings)\n</code></pre>\n<h3 id=\"why-this-structure-matters-for-quality\">Why this structure matters for quality</h3>\n<p>The key insight is <strong>progressive disclosure</strong>. Claude loads skill metadata (name + description) for every message to decide whether to activate a skill. 
The SKILL.md body loads when the skill triggers. Reference files load only when needed during execution.</p><p>This isn’t just about organization — it directly affects output quality. Every token in a bloated SKILL.md competes with the actual code research and generation for space in Claude’s context window. A 700-line SKILL.md stuffed with syntax examples means less room for Claude to think about your actual codebase. Extract reference material, and Claude has more headroom for the work that matters.</p><h3 id=\"what-goes-where\">What goes where</h3>\n<p><strong>SKILL.md (under 500 lines)</strong> — the brain of the skill:</p><ul>\n<li>YAML frontmatter with <code>name</code> and <code>description</code> (the trigger mechanism)</li>\n<li>References section pointing to additional files</li>\n<li>Configuration section with defaults</li>\n<li>Step-by-step workflow</li>\n<li>Edge cases and failure handling</li>\n<li>Principles section</li>\n</ul>\n<p><strong>Reference files</strong> — the knowledge base, loaded on demand:</p><ul>\n<li>Syntax examples, cheat sheets, common patterns, prompt templates</li>\n<li>The SKILL.md tells Claude when to read them with an explicit instruction</li>\n</ul>\n<pre><code class=\"language-markdown\">## References\n\n- `references/syntax-guide.md` — Syntax examples for all supported formats.\n  **Read this before generating output.** Do not generate from memory.\n</code></pre>\n<p>The bold instruction is important — it prevents Claude from “winging it” with training data that might be outdated.</p><p><strong>Config files</strong> — user-customizable settings:</p><ul>\n<li>A <code>.yaml</code> file letting users tune behavior without editing SKILL.md</li>\n<li>Every field optional, sensible defaults for everything</li>\n<li><strong>Rule: if the config file doesn’t exist, use defaults and don’t ask the user to create one.</strong> The skill should work immediately without any setup.</li>\n</ul>\n<h3 id=\"before-and-after\">Before and 
after</h3>\n<p>Here’s what a skill looks like when you don’t follow this structure vs. when you do.</p><p><strong>Before (everything in SKILL.md, ~700 lines):</strong></p><pre><code class=\"language-markdown\">---\nname: my-diagrammer\ndescription: Creates diagrams.\n---\n\n# Diagrammer\n\n## Workflow\n1. Ask what to diagram\n2. Generate diagram\n\n## Mermaid Syntax Reference\n### Sequence diagrams\n(50 lines of syntax examples)\n### Class diagrams\n(40 lines of syntax examples)\n### ERD\n(40 lines of syntax examples)\n### State diagrams\n(30 lines)\n### Flowcharts\n(30 lines)\n... (8 more diagram types, 200+ more lines)\n\n## Common Patterns\n(100 lines of reusable patterns)\n</code></pre>\n<p>Problems: description is too vague to trigger reliably, syntax reference bloats the file, no failure handling, no config, no principles.</p><p><strong>After (structured, ~300 lines SKILL.md + reference files):</strong></p><pre><code class=\"language-markdown\">---\nname: my-diagrammer\ndescription: &gt;\n  Generate diagrams from codebases by researching actual project code. Use this\n  skill whenever the user asks to create, generate, or update diagrams. Trigger\n  on phrases like &quot;diagram the flow&quot;, &quot;visualize the schema&quot;, &quot;show me how X\n  works&quot;. Also trigger when the user says &quot;how does X connect to Y&quot; — this\n  implies a visual would help.\n---\n\n# Diagrammer\n\n## References\n- `references/syntax.md` — **Read before generating.** Don&#39;t use from memory.\n\n## Configuration\n(config table with defaults)\n\n## Workflow\n### Step 0: Load config\n### Step 1: Check for existing diagrams\n### Step 2: Ask what to diagram (skip if already specified)\n### Step 3: Research the code (the most important step)\n### Step 4: Generate\n### Step 5: Save and confirm\n\n## Handling Failures\n(what to do when code isn&#39;t found, MCP fails, etc.)\n\n## Principles\n1. Accuracy over aesthetics\n2. Research thoroughly\n3. 
Never lose work\n</code></pre>\n<p>The second version triggers more reliably, researches code before generating, handles failures, and keeps the heavy syntax reference out of the main file.</p><hr>\n<h2 id=\"writing-a-description-that-actually-triggers\">Writing a Description That Actually Triggers</h2>\n<p>The description is the most important part of your skill. It’s the only thing Claude sees when deciding whether to activate it. A bad description means the skill never triggers — or triggers for the wrong things.</p><p><strong>Be pushy.</strong> The skill-creator documentation explicitly recommends this because Claude tends to “under-trigger” — to not use skills when they’d be helpful.</p><p>A good description follows this pattern:</p><pre><code class=\"language-yaml\">description: &gt;\n  {What the skill does in one sentence}. Use this skill whenever {explicit \n  triggers}. Trigger on phrases like &quot;{keyword 1}&quot;, &quot;{keyword 2}&quot;, &quot;{keyword 3}&quot;.\n  Also trigger when the user {describes the intent without using the keywords} — \n  e.g. 
&quot;{natural language example 1}&quot;, &quot;{natural language example 2}&quot;.\n  Even if the user just {edge case trigger} — use this skill.\n</code></pre>\n<p>The four layers:</p><ol>\n<li><strong>What it is</strong> — one-sentence summary</li>\n<li><strong>Explicit triggers</strong> — direct keywords and phrases</li>\n<li><strong>Implicit triggers</strong> — descriptions of intent without specific keywords</li>\n<li><strong>Catch-all</strong> — edge cases that should still trigger the skill</li>\n</ol>\n<p><strong>Don’t be shy about listing trigger phrases.</strong> Better to over-trigger and handle gracefully than to never trigger at all.</p><hr>\n<h2 id=\"the-interactive-workflow-pattern\">The Interactive Workflow Pattern</h2>\n<p>Most good skills follow the same general workflow:</p><ol>\n<li><strong>Load configuration</strong> — read config, set defaults</li>\n<li><strong>Check existing work</strong> — don’t duplicate what’s already done</li>\n<li><strong>Understand the request</strong> — ask at most one clarifying question</li>\n<li><strong>Research the code</strong> — the most important step</li>\n<li><strong>Produce the output</strong> — using real names from the code</li>\n<li><strong>Confirm and offer follow-ups</strong> — let the user iterate</li>\n</ol>\n<p>The golden rule: <strong>ask the user at every decision point, but don’t over-ask.</strong> If the user already specified what they want in their initial message, skip the questions.</p><h3 id=\"check-existing-work-first\">Check Existing Work First</h3>\n<p>Before creating anything, check if it already exists. If your skill produces files, maintain an index and search it before generating:</p><pre><code class=\"language-markdown\">&gt; I found an existing {thing} that might be what you&#39;re looking for:\n&gt; - [{Name}](./path/to/file.md) — short description\n&gt;\n&gt; Would you like to:\n&gt; 1. View this existing one\n&gt; 2. Update/regenerate it\n&gt; 3. 
Create a new, separate one\n</code></pre>\n<p>This prevents the frustrating experience of Claude creating a second version of something that already exists. One of our skills initially lacked this check, and Claude would happily create duplicate output for the same request.</p><h3 id=\"research-the-code-thoroughly\">Research the Code Thoroughly</h3>\n<p>This is where code-aware skills add their real value. If your skill operates on a codebase, tell Claude to research deeply:</p><ol>\n<li><strong>Start broad</strong> — look at the project structure, find relevant directories</li>\n<li><strong>Go deep</strong> — read actual files, follow call chains, note method signatures and relationships</li>\n<li><strong>Surface hidden behavior</strong> — middleware, event listeners, observers, cron jobs</li>\n<li><strong>Ask if unclear</strong> — ambiguous code? Ask the user rather than guessing</li>\n</ol>\n<p>The key instruction that makes this work:</p><pre><code class=\"language-markdown\">This is the most important step. Thoroughly research the actual codebase \nbefore producing output. Spend more time reading code than writing output.\n</code></pre>\n<p>Without this emphasis, Claude tends to rush to generation based on partial understanding.</p><h3 id=\"adapt-to-the-environment\">Adapt to the environment</h3>\n<p>If your skill targets a specific language or ecosystem, consider adding <strong>environment detection</strong> so it adapts automatically. For example, a PHP skill might detect which framework is in use and adjust its research paths accordingly — looking for controllers in <code>app/Http/Controllers/</code> for one framework but <code>src/Controller/</code> for another. A Python skill might detect Django vs. Flask vs. 
FastAPI.</p><p>The pattern is: auto-detect by default, let the user override via config for edge cases.</p><hr>\n<h2 id=\"handle-failures-gracefully\">Handle Failures Gracefully</h2>\n<p>Things will go wrong — code doesn’t match expectations, MCP servers disconnect, searches find nothing. Plan for every failure mode explicitly.</p><h3 id=\"when-the-code-doesnt-exist\">When the code doesn’t exist</h3>\n<p>If your skill researches code and the requested feature/flow isn’t there, don’t invent or guess:</p><pre><code class=\"language-markdown\">&gt; I searched the codebase but couldn&#39;t find an implementation for &quot;{request}&quot;.\n&gt; Here&#39;s what I found related to this area: {list what you did find}.\n&gt;\n&gt; Would you like me to:\n&gt; 1. Look again — point me to the right files\n&gt; 2. Create a draft showing a proposed design (marked as DRAFT)\n&gt; 3. Try something else instead\n</code></pre>\n<p>The first time we didn’t handle this, Claude invented a plausible-looking but completely fictional flow diagram. Adding explicit “if nothing found” instructions fixed it completely.</p><h3 id=\"when-external-services-fail\">When external services fail</h3>\n<p>If your skill depends on an MCP server or external tool, always have a fallback:</p><pre><code class=\"language-markdown\">If the external service fails mid-creation (after research is done), don&#39;t \nlose the work. Immediately save the output in a local format. Then offer \nto retry the external service.\n</code></pre>\n<p><strong>Principle: never lose completed research.</strong> The user’s time waiting for code analysis is valuable. If the final rendering step fails, save what you have.</p><h3 id=\"when-mcp-tools-change\">When MCP tools change</h3>\n<p>For any skill that depends on MCP servers, don’t hardcode tool names:</p><pre><code class=\"language-markdown\">Before doing anything, list the available MCP tools to discover what the \nserver provides. 
Tool names, parameters, and capabilities change between \nversions. **Never assume a fixed API.** Always discover first.\n</code></pre>\n<p>We learned this the hard way — an MCP skill that assumed a specific tool name broke when the server updated. Making it discovery-first (introspect tools → adapt) made it resilient to any API change.</p><hr>\n<h2 id=\"index-files-track-what-exists\">Index Files: Track What Exists</h2>\n<p>Every skill that produces files should maintain an index — a markdown file listing everything it has created:</p><pre><code class=\"language-markdown\"># Generated Output\n\n&gt; Index. Last updated: 2025-12-15\n\n### {Category}\n- [{Title}](./{category}/{name}.md) — {short description}\n</code></pre>\n<p><strong>Build the index dynamically</strong> — only include section headers for categories that actually have entries. Our first version had a static template listing all possible categories, most of them empty. Much better to generate sections on the fly.</p><p><strong>Rules every index should follow:</strong></p><ul>\n<li>Only add, never remove (unless the user explicitly asks)</li>\n<li>Keep entries sorted alphabetically within each section</li>\n<li>Use relative paths</li>\n<li>Mark drafts: <code>[DRAFT] {Title}</code></li>\n<li>Update the <code>Last updated</code> date</li>\n</ul>\n<hr>\n<h2 id=\"config-design-principles\">Config Design Principles</h2>\n<p>After building several config files, these patterns emerged:</p><ol>\n<li><strong>Every field is optional.</strong> The skill must work with zero configuration.</li>\n<li><strong>Defaults are sensible.</strong> Cover the 80% case out of the box.</li>\n<li><strong>Comments as documentation.</strong> The config file itself teaches the user what’s available:</li>\n</ol>\n<pre><code class=\"language-yaml\"># Detail level:\n#   overview  — high-level, no method names\n#   standard  — class/method names, happy + key error paths\n#   detailed  — every call, all error paths, middleware, 
events\ndetail: standard\n</code></pre>\n<ol start=\"4\">\n<li><strong>Don’t duplicate the SKILL.md.</strong> The config table in SKILL.md is a quick reference. The <code>.yaml</code> file has the full documentation.</li>\n<li><strong>No config file prompt.</strong> If it doesn’t exist, use defaults silently. Never tell the user “you should create a config file.”</li>\n</ol>\n<hr>\n<h2 id=\"principles-section-the-skills-values\">Principles Section: The Skill’s Values</h2>\n<p>Every skill should end with a principles section — these are the tiebreakers when instructions are ambiguous. Order them by priority.</p><p>The best principles we’ve found across multiple skills:</p><ul>\n<li><strong>Accuracy over aesthetics</strong> — correct but plain beats pretty but wrong</li>\n<li><strong>Ask, don’t assume</strong> — when in doubt, ask the user</li>\n<li><strong>Research thoroughly</strong> — more time reading code than producing output</li>\n<li><strong>Never lose work</strong> — if something fails, save what you have</li>\n<li><strong>Discover, don’t assume</strong> — for MCP skills, introspect tools before using them</li>\n<li><strong>Real names always</strong> — use actual class/method/table names from the code</li>\n<li><strong>Graceful degradation</strong> — always have a fallback path</li>\n</ul>\n<p>These aren’t just nice-to-haves. Each principle exists because we hit a real problem that it solves. “Never lose work” came from an MCP failure that discarded 5 minutes of code analysis. “Real names always” came from Claude generating output with generic labels instead of actual class names.</p><hr>\n<h2 id=\"how-to-test-a-skill\">How to Test a Skill</h2>\n<p>Don’t just read your skill and imagine how it would work. Run it. The gap between “looks correct when read” and “works correctly when used” is always wider than you expect.</p><p><strong>Write 3-5 realistic test prompts</strong> — the kind of thing a real user would actually say. 
Not “test the diagram feature” but “diagram the payment flow in this project” or “explain how user authentication works.” Run each prompt with the skill installed and evaluate:</p><ol>\n<li><strong>Did the skill trigger?</strong> If not, your description needs more trigger phrases.</li>\n<li><strong>Did it research before producing?</strong> If it jumped straight to output, your research step isn’t emphatic enough.</li>\n<li><strong>Are the names real?</strong> Check that it uses actual class/method names from your code, not generic placeholders.</li>\n<li><strong>Did it handle edge cases?</strong> Try a prompt for something that doesn’t exist in the codebase. Does it invent, or does it tell you it found nothing?</li>\n<li><strong>Is the output the right depth?</strong> If you asked for an overview and got 2000 words, or asked for a deep dive and got 3 paragraphs, the depth instructions need tuning.</li>\n</ol>\n<hr>\n<h2 id=\"common-mistakes-so-you-dont-have-to\">Common Mistakes (So You Don’t Have To)</h2>\n<p><strong>1. Putting reference content in SKILL.md.</strong> Syntax examples, cheat sheets, and pattern libraries belong in <code>references/</code> files. Your SKILL.md should be the workflow, not the encyclopedia. This also wastes context window — Claude has less room to think about your code when the SKILL.md is bloated.</p><p><strong>2. Static index templates.</strong> Listing all possible sections in the index template creates bloat. Build it dynamically — only sections with actual entries.</p><p><strong>3. Assuming one framework/environment.</strong> If your skill targets a language with multiple frameworks, detect the environment and provide parallel paths. Let the user override via config.</p><p><strong>4. No failure handling for “nothing found.”</strong> Without explicit instructions, Claude will invent plausible-looking but fictional output. Always handle the “I found nothing” case.</p><p><strong>5. 
Hardcoded MCP tool names.</strong> MCP servers evolve. Tools get renamed, parameters change. Discover tools at runtime, don’t assume they’ll stay the same.</p><p><strong>6. No duplicate checking.</strong> Without checking existing work first, Claude happily creates a second version of something that already exists. Always search before creating.</p><p><strong>7. Over-asking questions.</strong> If the user already specified what they want in their initial message, don’t ask them again. Parse their intent from what they said and start working. One clarifying question max.</p><hr>\n<h2 id=\"checklist-before-you-ship-a-skill\">Checklist: Before You Ship a Skill</h2>\n<ul>\n<li><input disabled=\"\" type=\"checkbox\"> SKILL.md is under 500 lines</li>\n<li><input disabled=\"\" type=\"checkbox\"> Description is pushy with explicit trigger phrases</li>\n<li><input disabled=\"\" type=\"checkbox\"> Heavy reference content is extracted to <code>references/</code> files</li>\n<li><input disabled=\"\" type=\"checkbox\"> Config has sensible defaults, every field is optional</li>\n<li><input disabled=\"\" type=\"checkbox\"> Workflow checks for existing work before creating</li>\n<li><input disabled=\"\" type=\"checkbox\"> “Nothing found” is handled explicitly</li>\n<li><input disabled=\"\" type=\"checkbox\"> External service failures have fallback paths</li>\n<li><input disabled=\"\" type=\"checkbox\"> Index (if applicable) is built dynamically, not from a static template</li>\n<li><input disabled=\"\" type=\"checkbox\"> Principles section captures the skill’s values</li>\n<li><input disabled=\"\" type=\"checkbox\"> Tested with 3-5 real prompts, not just reviewed as text</li>\n</ul>\n<hr>\n<p><em>The patterns in this post come from building and iterating on real skills. 
You can see full working examples at <a href=\"https://github.com/axitdev/skills\"  class=\"extlink extlink-icon-1\"  >github.com/axitdev/skills</a> — use them as reference or as a starting point for your own.</em></p>",
            "author": {
                "name": "Oleksii Ivanov"
            },
            "tags": [
                   "Tutorial",
                   "Prompt Engineering",
                   "Coding assistants",
                   "Claude Code",
                   "AI Skills",
                   "AI"
            ],
            "date_published": "2026-04-04T02:00:00+02:00",
            "date_modified": "2026-04-04T02:05:38+02:00"
        },
        {
            "id": "https://ddd.wtf/post/the-hidden-power-of-php-generators-for-large-datasets/",
            "url": "https://ddd.wtf/post/the-hidden-power-of-php-generators-for-large-datasets/",
            "title": "The Hidden Power of PHP Generators for Large Datasets",
            "summary": "TL;DR: PHP generators let you process million-row CSVs, paginated APIs, and massive database exports without running out of memory. This article goes beyond the yield&hellip;",
            "content_html": "<hr>\n<p><strong>TL;DR:</strong> PHP generators let you process million-row CSVs, paginated APIs, and massive database exports without running out of memory. This article goes beyond the <code>yield</code> basics — covering generator pipelines, <code>yield from</code> delegation, bidirectional communication, and real memory comparisons that show why generators should be your default tool for any dataset that doesn’t fit comfortably in RAM.</p><hr>\n<h2 id=\"you-already-know-yield-heres-why-youre-not-using-it-enough\">You Already Know <code>yield</code>. Here’s Why You’re Not Using It Enough.</h2>\n<p>Most PHP developers learn generators from a tutorial that shows <code>yield</code> inside a <code>range()</code> replacement and think “cool, but I’ll never need that.” Then they write a queue worker that loads 200K rows into an array, blows through <code>memory_limit</code>, and spend an afternoon debugging it.</p><p>Generators are not a niche feature. They’re the right default for any data pipeline that processes more than a few hundred items. The problem is that most content stops at the basics. Let’s go further.</p><h2 id=\"quick-recap-what-generators-do\">Quick Recap: What Generators Do</h2>\n<p>A generator function uses <code>yield</code> instead of <code>return</code>. Instead of building a complete array and returning it, it produces values one at a time, only when asked:</p><pre><code class=\"language-php\">function naturalNumbers(): Generator\n{\n    $i = 1;\n    while (true) {\n        yield $i++;\n    }\n}\n\n// This doesn&#39;t consume infinite memory — it produces one number at a time\n$numbers = naturalNumbers();\necho $numbers-&gt;current(); // 1\n$numbers-&gt;next();\necho $numbers-&gt;current(); // 2\n</code></pre>\n<p>The key insight: <strong>between yields, the generator is suspended.</strong> It doesn’t use CPU cycles. 
It holds only its local variables in memory — not the entire dataset.</p><p>In practice, you almost never call <code>-&gt;current()</code> and <code>-&gt;next()</code> directly. You iterate with <code>foreach</code>:</p><pre><code class=\"language-php\">foreach (naturalNumbers() as $n) {\n    if ($n &gt; 1000000) break;\n    // Process each number without storing the full sequence\n}\n</code></pre>\n<h2 id=\"real-world-pattern-1-streaming-csv-processing\">Real-World Pattern #1: Streaming CSV Processing</h2>\n<p>The most common use case I hit in production. Clients upload CSV exports ranging from 10K to 5M rows. Processing them with <code>file()</code> or <code>fgetcsv()</code> in a loop that collects into an array is a ticking time bomb.</p><pre><code class=\"language-php\">/**\n * Read a CSV file row by row, yielding associative arrays.\n * Memory usage is constant regardless of file size.\n */\nfunction readCsv(string $path, string $separator = &#39;,&#39;): Generator\n{\n    $handle = fopen($path, &#39;r&#39;);\n\n    if ($handle === false) {\n        throw new \\RuntimeException(&quot;Cannot open file: {$path}&quot;);\n    }\n\n    try {\n        // First row is the header\n        $headers = fgetcsv($handle, 0, $separator);\n\n        if ($headers === false) {\n            return; // Empty file\n        }\n\n        // Trim BOM and whitespace from headers\n        $headers = array_map(fn ($h) =&gt; trim($h, &quot;\\xEF\\xBB\\xBF \\t\\n\\r&quot;), $headers);\n        $columnCount = count($headers);\n\n        $lineNumber = 1;\n        while (($row = fgetcsv($handle, 0, $separator)) !== false) {\n            $lineNumber++;\n\n            // Skip rows with wrong column count (malformed data)\n            if (count($row) !== $columnCount) {\n                // Log and skip rather than crash\n                error_log(&quot;CSV line {$lineNumber}: expected {$columnCount} columns, got &quot; . 
count($row));\n                continue;\n            }\n\n            yield $lineNumber =&gt; array_combine($headers, $row);\n        }\n    } finally {\n        fclose($handle);\n    }\n}\n</code></pre>\n<p>Usage is dead simple:</p><pre><code class=\"language-php\">foreach (readCsv(&#39;/imports/customers-2026.csv&#39;) as $lineNum =&gt; $row) {\n    $this-&gt;upsertCustomer(\n        email: $row[&#39;email&#39;],\n        name: $row[&#39;full_name&#39;],\n        region: $row[&#39;region&#39;],\n    );\n}\n</code></pre>\n<p>Let’s put numbers on this:</p><table>\n<thead>\n<tr>\n<th>File size</th>\n<th>Rows</th>\n<th><code>file()</code> + array</th>\n<th>Generator</th>\n</tr>\n</thead>\n<tbody><tr>\n<td>5 MB</td>\n<td>10K</td>\n<td>18 MB</td>\n<td>2 MB</td>\n</tr>\n<tr>\n<td>50 MB</td>\n<td>100K</td>\n<td>165 MB</td>\n<td>2 MB</td>\n</tr>\n<tr>\n<td>500 MB</td>\n<td>1M</td>\n<td>OOM (&gt;512 MB)</td>\n<td>2 MB</td>\n</tr>\n<tr>\n<td>2 GB</td>\n<td>5M</td>\n<td>OOM</td>\n<td>2 MB</td>\n</tr>\n</tbody></table>\n<p>The generator column is always ~2 MB because it only holds one row plus the headers in memory at any time. The file handle is buffered by the OS. You could process a 50 GB file on a server with 128 MB <code>memory_limit</code> and it would work fine.</p><h2 id=\"real-world-pattern-2-paginated-api-consumption\">Real-World Pattern #2: Paginated API Consumption</h2>\n<p>External APIs paginate their responses. 
The naive approach loads all pages into an array before processing:</p><pre><code class=\"language-php\">// ❌ Accumulates all pages in memory\nfunction getAllProducts(ApiClient $api): array\n{\n    $all = [];\n    $page = 1;\n\n    do {\n        $response = $api-&gt;get(&#39;/products&#39;, [&#39;page&#39; =&gt; $page, &#39;per_page&#39; =&gt; 100]);\n        $all = array_merge($all, $response[&#39;data&#39;]);\n        $page++;\n    } while ($response[&#39;has_more&#39;]);\n\n    return $all; // Could be 50K+ items\n}\n</code></pre>\n<p>With a generator, each page is fetched on demand and each item is yielded individually:</p><pre><code class=\"language-php\">// ✅ Fetches and yields one page at a time\nfunction allProducts(ApiClient $api, int $perPage = 100): Generator\n{\n    $page = 1;\n\n    do {\n        $response = $api-&gt;get(&#39;/products&#39;, [\n            &#39;page&#39; =&gt; $page,\n            &#39;per_page&#39; =&gt; $perPage,\n        ]);\n\n        foreach ($response[&#39;data&#39;] as $product) {\n            yield $product;\n        }\n\n        $page++;\n    } while ($response[&#39;has_more&#39;]);\n}\n\n// Consumer doesn&#39;t know or care about pagination\nforeach (allProducts($api) as $product) {\n    $this-&gt;syncProduct($product);\n}\n</code></pre>\n<p>This pattern is even more powerful when the API uses cursor-based pagination:</p><pre><code class=\"language-php\">function allOrders(ApiClient $api): Generator\n{\n    $cursor = null;\n\n    do {\n        $params = [&#39;limit&#39; =&gt; 100];\n        if ($cursor !== null) {\n            $params[&#39;after&#39;] = $cursor;\n        }\n\n        $response = $api-&gt;get(&#39;/orders&#39;, $params);\n\n        foreach ($response[&#39;data&#39;] as $order) {\n            yield $order;\n        }\n\n        $cursor = $response[&#39;next_cursor&#39;];\n    } while ($cursor !== null);\n}\n</code></pre>\n<p>The consumer sees a flat stream of orders. 
The pagination complexity is entirely encapsulated in the generator.</p><h2 id=\"real-world-pattern-3-database-result-streaming\">Real-World Pattern #3: Database Result Streaming</h2>\n<p>PDO can fetch rows one at a time, but most code calls <code>fetchAll()</code> out of habit. For large result sets, use a generator:</p><pre><code class=\"language-php\">function queryStream(PDO $pdo, string $sql, array $params = []): Generator\n{\n    $stmt = $pdo-&gt;prepare($sql);\n    $stmt-&gt;execute($params);\n\n    while ($row = $stmt-&gt;fetch(PDO::FETCH_ASSOC)) {\n        yield $row;\n    }\n\n    $stmt-&gt;closeCursor();\n}\n\n// Process 500K orders without loading them all\nforeach (queryStream($pdo, &#39;SELECT * FROM orders WHERE year = ?&#39;, [2025]) as $order) {\n    $this-&gt;archiveOrder($order);\n}\n</code></pre>\n<p><strong>Important MySQL note:</strong> By default, PHP’s MySQL driver buffers the entire result set in memory on the client side (even with <code>fetch()</code>). To truly stream results, you need unbuffered queries:</p><pre><code class=\"language-php\">function mysqlStream(PDO $pdo, string $sql, array $params = []): Generator\n{\n    // Switch to unbuffered mode for this query\n    $pdo-&gt;setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);\n\n    try {\n        $stmt = $pdo-&gt;prepare($sql);\n        $stmt-&gt;execute($params);\n\n        while ($row = $stmt-&gt;fetch(PDO::FETCH_ASSOC)) {\n            yield $row;\n        }\n\n        $stmt-&gt;closeCursor();\n    } finally {\n        // Restore buffered mode for subsequent queries\n        $pdo-&gt;setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, true);\n    }\n}\n</code></pre>\n<p>With unbuffered queries, the memory profile drops from “entire result set” to “one row” — the difference between OOM and success for large exports.</p><p><strong>One caveat:</strong> with unbuffered queries, you cannot run other queries on the same PDO connection until the cursor is fully consumed or 
<code>closeCursor()</code> is called. If your pipeline needs to do lookups mid-stream — like the <code>enrichWithCustomer</code> stage below — use a <strong>separate PDO connection</strong> for those secondary queries.</p><h2 id=\"generator-pipelines-composable-data-transformations\">Generator Pipelines: Composable Data Transformations</h2>\n<p>This is where generators go from “useful” to “architectural pattern.” You can chain generators to build data pipelines where each stage transforms or filters the stream:</p><pre><code class=\"language-php\">// Stage 1: Read raw data\nfunction readOrders(PDO $pdo): Generator\n{\n    yield from queryStream($pdo, &#39;SELECT * FROM orders WHERE status = ?&#39;, [&#39;completed&#39;]);\n}\n\n// Stage 2: Enrich with customer data\nfunction enrichWithCustomer(Generator $orders, PDO $pdo): Generator\n{\n    // Batch customer lookups for efficiency\n    $customerCache = [];\n\n    foreach ($orders as $order) {\n        $customerId = $order[&#39;customer_id&#39;];\n\n        if (!isset($customerCache[$customerId])) {\n            $stmt = $pdo-&gt;prepare(&#39;SELECT name, email, tier FROM customers WHERE id = ?&#39;);\n            $stmt-&gt;execute([$customerId]);\n            $customerCache[$customerId] = $stmt-&gt;fetch(PDO::FETCH_ASSOC);\n\n            // Keep cache bounded — evict old entries\n            if (count($customerCache) &gt; 1000) {\n                $customerCache = array_slice($customerCache, -500, null, true);\n            }\n        }\n\n        $order[&#39;customer&#39;] = $customerCache[$customerId];\n        yield $order;\n    }\n}\n\n// Stage 3: Filter high-value orders\nfunction filterHighValue(Generator $orders, float $threshold = 500.0): Generator\n{\n    foreach ($orders as $order) {\n        if ((float) $order[&#39;total&#39;] &gt;= $threshold) {\n            yield $order;\n        }\n    }\n}\n\n// Stage 4: Format for export\nfunction formatForExport(Generator $orders): Generator\n{\n    foreach ($orders 
as $order) {\n        yield [\n            &#39;order_id&#39; =&gt; $order[&#39;id&#39;],\n            &#39;date&#39; =&gt; date(&#39;Y-m-d&#39;, strtotime($order[&#39;created_at&#39;])),\n            &#39;customer_name&#39; =&gt; $order[&#39;customer&#39;][&#39;name&#39;],\n            &#39;customer_tier&#39; =&gt; $order[&#39;customer&#39;][&#39;tier&#39;],\n            &#39;total&#39; =&gt; number_format((float) $order[&#39;total&#39;], 2),\n        ];\n    }\n}\n\n// Compose the pipeline\n$pipeline = formatForExport(\n    filterHighValue(\n        enrichWithCustomer(\n            readOrders($pdo),\n            $pdo,\n        ),\n        threshold: 1000.0,\n    )\n);\n\n// Write to CSV — the entire pipeline processes one row at a time\n$out = fopen(&#39;high-value-orders.csv&#39;, &#39;w&#39;);\n$headerWritten = false;\n\nforeach ($pipeline as $row) {\n    if (!$headerWritten) {\n        fputcsv($out, array_keys($row));\n        $headerWritten = true;\n    }\n    fputcsv($out, $row);\n}\n\nfclose($out);\n</code></pre>\n<p>This pipeline reads from the database, enriches with customer data (with a bounded cache), filters, formats, and writes to CSV — all in constant memory. A 500K-row export uses the same ~5 MB of RAM as a 5K-row export.</p><h3 id=\"making-pipelines-cleaner-with-a-builder\">Making Pipelines Cleaner with a Builder</h3>\n<p>The nested function calls above compose right-to-left, which can be hard to follow. 
A thin wrapper flips the composition to read left-to-right:</p><pre><code class=\"language-php\">final class Pipeline\n{\n    private Generator $source;\n\n    public function __construct(Generator $source)\n    {\n        $this-&gt;source = $source;\n    }\n\n    public function pipe(callable $stage): self\n    {\n        $this-&gt;source = $stage($this-&gt;source);\n        return $this;\n    }\n\n    public function filter(callable $predicate): self\n    {\n        return $this-&gt;pipe(function (Generator $input) use ($predicate): Generator {\n            foreach ($input as $key =&gt; $item) {\n                if ($predicate($item)) {\n                    yield $key =&gt; $item;\n                }\n            }\n        });\n    }\n\n    public function map(callable $transform): self\n    {\n        return $this-&gt;pipe(function (Generator $input) use ($transform): Generator {\n            foreach ($input as $key =&gt; $item) {\n                yield $key =&gt; $transform($item);\n            }\n        });\n    }\n\n    public function each(callable $callback): void\n    {\n        foreach ($this-&gt;source as $item) {\n            $callback($item);\n        }\n    }\n\n    public function toArray(): array\n    {\n        return iterator_to_array($this-&gt;source);\n    }\n\n    public function reduce(callable $callback, mixed $initial = null): mixed\n    {\n        $carry = $initial;\n        foreach ($this-&gt;source as $item) {\n            $carry = $callback($carry, $item);\n        }\n        return $carry;\n    }\n}\n\n// Now the same pipeline reads left to right:\n(new Pipeline(readOrders($pdo)))\n    -&gt;pipe(fn ($g) =&gt; enrichWithCustomer($g, $pdo))\n    -&gt;filter(fn ($order) =&gt; (float) $order[&#39;total&#39;] &gt;= 1000.0)\n    -&gt;map(fn ($order) =&gt; [\n        &#39;order_id&#39; =&gt; $order[&#39;id&#39;],\n        &#39;customer&#39; =&gt; $order[&#39;customer&#39;][&#39;name&#39;],\n        &#39;total&#39; =&gt; number_format((float) 
$order[&#39;total&#39;], 2),\n    ])\n    -&gt;each(fn ($row) =&gt; fputcsv($out, $row));\n</code></pre>\n<h2 id=\"yield-from-delegation-and-flattening\"><code>yield from</code>: Delegation and Flattening</h2>\n<p><code>yield from</code> delegates to another generator (or any iterable), flattening nested sequences:</p><pre><code class=\"language-php\">function allTransactions(PDO $pdo): Generator\n{\n    // Combine multiple sources into one stream\n    yield from queryStream($pdo, &#39;SELECT *, &quot;sale&quot; as type FROM sales&#39;);\n    yield from queryStream($pdo, &#39;SELECT *, &quot;refund&quot; as type FROM refunds&#39;);\n    yield from queryStream($pdo, &#39;SELECT *, &quot;chargeback&quot; as type FROM chargebacks&#39;);\n}\n\n// Consumer sees a single flat stream of transactions\nforeach (allTransactions($pdo) as $txn) {\n    $this-&gt;ledger-&gt;record($txn);\n}\n</code></pre>\n<p>This is powerful for combining data from multiple tables, files, or APIs into one unified stream without loading any of them fully into memory.</p><p><strong>A practical use case — multi-file import:</strong></p><pre><code class=\"language-php\">function importDirectory(string $dir): Generator\n{\n    $files = glob(&quot;{$dir}/*.csv&quot;);\n\n    foreach ($files as $file) {\n        echo &quot;Processing: {$file}\\n&quot;;\n        yield from readCsv($file);\n    }\n}\n\n// Process all CSVs in a directory as one stream\nforeach (importDirectory(&#39;/imports/2026-03&#39;) as $lineNum =&gt; $row) {\n    $this-&gt;importRow($row);\n}\n</code></pre>\n<h2 id=\"bidirectional-communication-send-and-backpressure\">Bidirectional Communication: <code>send()</code> and Backpressure</h2>\n<p>Generators can receive values via <code>send()</code>, which becomes the return value of the <code>yield</code> expression. 
This enables backpressure patterns — the consumer can signal the producer:</p><pre><code class=\"language-php\">function controllableProducer(PDO $pdo): Generator\n{\n    $offset = 0;\n    $batchSize = 1000;\n\n    // Prepare once; re-execute with a new offset for each batch\n    $stmt = $pdo-&gt;prepare(&quot;SELECT * FROM events LIMIT ? OFFSET ?&quot;);\n\n    while (true) {\n        $stmt-&gt;execute([$batchSize, $offset]);\n        $rows = $stmt-&gt;fetchAll(PDO::FETCH_ASSOC);\n\n        if (empty($rows)) {\n            return; // No more data\n        }\n\n        foreach ($rows as $row) {\n            $signal = yield $row;\n\n            // Consumer can send signals back\n            if ($signal === &#39;skip_batch&#39;) {\n                break; // Skip rest of this batch\n            }\n            if ($signal === &#39;stop&#39;) {\n                return; // Stop entirely\n            }\n        }\n\n        $offset += $batchSize;\n    }\n}\n\n// Usage\n$producer = controllableProducer($pdo);\n\nforeach ($producer as $event) {\n    $result = $this-&gt;processEvent($event);\n\n    if ($result === &#39;rate_limited&#39;) {\n        // Tell the producer to stop cleanly (a finished generator can&#39;t be resumed)\n        $producer-&gt;send(&#39;stop&#39;);\n    }\n}\n</code></pre>\n<p>I’ll be honest: I use <code>send()</code> rarely. In most cases, you can achieve the same result with a simple <code>break</code> or by tracking state outside the generator. But for complex producer-consumer patterns where the producer needs to adapt its behavior, <code>send()</code> is the clean tool.</p><h2 id=\"performance-generators-vs-arrays\">Performance: Generators vs. Arrays</h2>\n<p>Let’s settle this with numbers. 
Processing 100K items through a filter + map + reduce:</p><table>\n<thead>\n<tr>\n<th>Approach</th>\n<th>Peak memory</th>\n<th>Time</th>\n<th>Notes</th>\n</tr>\n</thead>\n<tbody><tr>\n<td><code>array_filter</code> + <code>array_map</code> + <code>array_reduce</code></td>\n<td>82 MB</td>\n<td>95 ms</td>\n<td>Creates 3 intermediate arrays</td>\n</tr>\n<tr>\n<td>Raw generator pipeline</td>\n<td>2 MB</td>\n<td>88 ms</td>\n<td>Minimal overhead</td>\n</tr>\n</tbody></table>\n<p>The time difference is negligible. The memory difference is not. When you’re running 10 queue workers, each processing large datasets, the difference between 82 MB and 2 MB per worker is the difference between needing 820 MB and 20 MB of total memory. That’s real money on your hosting bill.</p><h2 id=\"common-mistakes\">Common Mistakes</h2>\n<h3 id=\"1-calling-iterator_to_array-too-early\">1. Calling <code>iterator_to_array()</code> Too Early</h3>\n<pre><code class=\"language-php\">// ❌ Defeats the entire purpose of the generator\n$allRows = iterator_to_array(readCsv(&#39;huge-file.csv&#39;));\n// You just loaded the entire file into an array\n\n// ✅ Consume lazily\nforeach (readCsv(&#39;huge-file.csv&#39;) as $row) {\n    process($row);\n}\n</code></pre>\n<h3 id=\"2-forgetting-that-generators-are-forward-only\">2. Forgetting That Generators Are Forward-Only</h3>\n<pre><code class=\"language-php\">$gen = readCsv(&#39;data.csv&#39;);\n\nforeach ($gen as $row) { /* first pass */ }\nforeach ($gen as $row) { /* throws: &quot;Cannot traverse an already closed generator&quot; */ }\n\n// If you need multiple passes, either:\n// a) Create a new generator for each pass\n// b) Collect into an array (if it fits in memory)\n// c) Restructure to do everything in one pass\n</code></pre>\n<h3 id=\"3-not-handling-cleanup\">3. 
Not Handling Cleanup</h3>\n<p>If a consumer breaks out of a generator early, the <code>finally</code> block still runs once the generator is destroyed — use it for cleanup:</p><pre><code class=\"language-php\">// Named readLines() because PHP already has a built-in readfile(),\n// and function names are case-insensitive\nfunction readLines(string $path): Generator\n{\n    $handle = fopen($path, &#39;r&#39;);\n\n    try {\n        while (($line = fgets($handle)) !== false) {\n            yield trim($line);\n        }\n    } finally {\n        // This runs even if the consumer breaks early\n        fclose($handle);\n    }\n}\n\n// The file handle is properly closed even though we break early\nforeach (readLines(&#39;log.txt&#39;) as $line) {\n    if (str_contains($line, &#39;FATAL&#39;)) {\n        $this-&gt;alert($line);\n        break; // finally block still runs, file is closed\n    }\n}\n</code></pre>\n<h3 id=\"4-returning-values-from-generators\">4. Returning Values from Generators</h3>\n<p>Generators can return a final value (accessible via <code>getReturn()</code>), but it’s rarely useful and often confusing:</p><pre><code class=\"language-php\">function countedRead(string $path): Generator\n{\n    $count = 0;\n    $handle = fopen($path, &#39;r&#39;);\n\n    while (($line = fgets($handle)) !== false) {\n        yield trim($line);\n        $count++;\n    }\n\n    fclose($handle);\n    return $count; // Accessible after generator completes\n}\n\n$gen = countedRead(&#39;data.txt&#39;);\nforeach ($gen as $line) {\n    process($line);\n}\necho &quot;Processed {$gen-&gt;getReturn()} lines&quot;; // Works, but a counter variable is simpler\n</code></pre>\n<p>My advice: avoid <code>return</code> in generators. Track metadata with a separate counter or wrapper object. 
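For example, a small wrapper (a hypothetical <code>CountingIterator</code>, not a built-in) keeps the count outside the generator while staying lazy:</p><pre><code class=\"language-php\">final class CountingIterator implements IteratorAggregate\n{\n    public int $count = 0;\n\n    public function __construct(private iterable $source) {}\n\n    public function getIterator(): Generator\n    {\n        foreach ($this-&gt;source as $key =&gt; $item) {\n            $this-&gt;count++; // Counts as items are consumed\n            yield $key =&gt; $item;\n        }\n    }\n}\n\n$rows = new CountingIterator(readCsv(&#39;data.csv&#39;));\nforeach ($rows as $row) {\n    process($row);\n}\necho &quot;Processed {$rows-&gt;count} rows&quot;;\n</code></pre>\n<p>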
It’s clearer.</p><h2 id=\"when-to-reach-for-a-generator\">When to Reach for a Generator</h2>\n<p>My personal heuristic is simple:</p><ul>\n<li><strong>Processing more than ~1,000 items?</strong> Use a generator.</li>\n<li><strong>Reading from a file, API, or database?</strong> Use a generator.</li>\n<li><strong>Building a pipeline with filter → map → reduce?</strong> Use a generator.</li>\n<li><strong>Combining multiple data sources?</strong> Use <code>yield from</code>.</li>\n<li><strong>Need the full array for <code>usort</code>, <code>array_unique</code>, or random access?</strong> Use an array — generators can’t do that.</li>\n</ul>\n<p>Generators are not a premature optimization. They’re a better default. An array is the special case — you use it when you need random access or the dataset is small enough that it doesn’t matter.</p><p>Start yielding.</p>",
            "author": {
                "name": "Oleksii Ivanov"
            },
            "tags": [
                   "Tutorial",
                   "Performance",
                   "PHP",
                   "Database"
            ],
            "date_published": "2026-03-28T10:00:00+01:00",
            "date_modified": "2026-03-28T11:01:40+01:00"
        },
        {
            "id": "https://ddd.wtf/post/the-state-of-php-in-2026-whats-changed-and-whats-coming/",
            "url": "https://ddd.wtf/post/the-state-of-php-in-2026-whats-changed-and-whats-coming/",
            "title": "The State of PHP in 2026: What&#x27;s Changed and What&#x27;s Coming",
            "summary": "TL;DR: PHP is thriving in 2026. The 8.x series has transformed the language with property hooks, generics-adjacent features, and performance improvements that rival compiled languages&hellip;",
            "content_html": "<hr>\n<p><strong>TL;DR:</strong> PHP is thriving in 2026. The 8.x series has transformed the language with property hooks, generics-adjacent features, and performance improvements that rival compiled languages for web workloads. The ecosystem is mature, hiring is stable, and the “PHP is dead” narrative looks more ridiculous than ever. Here’s your comprehensive roundup of where PHP stands and where it’s headed.</p><hr>\n<h2 id=\"the-numbers-dont-lie\">The Numbers Don’t Lie</h2>\n<p>Let’s start with the data that makes PHP haters uncomfortable.</p><p>As of early 2026, PHP powers roughly 75-77% of websites with a known server-side language. While that number has gradually declined from its peak of 80%+, the absolute number of PHP-powered sites continues to grow. WordPress alone accounts for over 40% of all websites, and it’s not going anywhere.</p><p>But raw market share tells only part of the story. The more interesting metrics are:</p><ul>\n<li><strong>Packagist</strong> surpassed 45 billion total installs, with monthly downloads consistently above 3 billion</li>\n<li><strong>PHP Foundation</strong> funding reached sustainable levels, with multiple full-time developers working on core</li>\n<li><strong>JIT compilation</strong> improvements in PHP 8.3 and 8.4 narrowed the gap with Node.js on CPU-bound workloads to single-digit percentage differences</li>\n<li><strong>Laravel</strong> crossed 100K GitHub stars and remains the fastest-growing backend framework across all languages on GitHub</li>\n</ul>\n<pre><code>Framework Ecosystem Health (GitHub activity, 2025)\n──────────────────────────────────────────────────\nLaravel         ████████████████████████████ 100K+ ★\nSymfony         ██████████████████           30K+ ★\nSlim            ███████████                  12K+ ★\nAPI Platform    ████████                     8K+ ★\n</code></pre>\n<h2 id=\"language-features-the-8x-renaissance\">Language Features: The 8.x Renaissance</h2>\n<p>PHP 8.0 
through 8.4 represent the most significant evolution of the language since PHP 7.0’s performance overhaul. Let’s recap what landed and what it means for daily development.</p><h3 id=\"php-81--the-foundation\">PHP 8.1 — The Foundation</h3>\n<p>Enums and fibers laid groundwork that took years to fully appreciate. Readonly properties started the immutability push. Intersection types gave us composability.</p><h3 id=\"php-82--refinement\">PHP 8.2 — Refinement</h3>\n<p>Readonly classes, <code>true</code>/<code>false</code>/<code>null</code> as standalone types, and DNF (Disjunctive Normal Form) types. The deprecation of dynamic properties was controversial but correct — it pushed codebases toward explicit, analyzable structures.</p><h3 id=\"php-83--quality-of-life\">PHP 8.3 — Quality of Life</h3>\n<p>Typed class constants, <code>json_validate()</code>, the <code>#[Override]</code> attribute, and readonly-property reinitialization during cloning. Each is small on its own; together they made strict typing feel natural rather than burdensome.</p><h3 id=\"php-84--the-game-changer\">PHP 8.4 — The Game Changer</h3>\n<p>Property hooks are the headline feature, but the full picture is bigger:</p><pre><code class=\"language-php\">class Money\n{\n    // Virtual, get-only property (hooked properties can&#39;t be declared readonly in 8.4)\n    public string $formatted {\n        get =&gt; number_format($this-&gt;amount / 100, 2) . &#39; &#39; . $this-&gt;currency;\n    }\n\n    public function __construct(\n        private int $amount,\n        private string $currency = &#39;USD&#39;,\n    ) {}\n}\n\n$price = new Money(4999);\necho $price-&gt;formatted; // &quot;49.99 USD&quot;\n</code></pre>\n<p>Property hooks eliminate the getter/setter boilerplate that made PHP feel verbose compared to Kotlin or C#. 
Combined with asymmetric visibility (<code>public private(set)</code>), PHP’s object model is now genuinely expressive.</p><p>Other 8.4 highlights:</p><ul>\n<li><strong>Lazy objects</strong> via <code>ReflectionClass::newLazyProxy()</code> — massive DI container optimization potential</li>\n<li><strong><code>#[Deprecated]</code> attribute</strong> for userland code — finally, proper deprecation notices without docblock hacks</li>\n<li><strong><code>new</code> without wrapping parentheses</strong> — <code>new Foo()-&gt;bar()</code> chains cleanly, no more <code>(new Foo())-&gt;bar()</code></li>\n<li><strong>HTML5 parser</strong> in DOM extension — goodbye, libxml2 HTML parsing nightmares</li>\n</ul>\n<h3 id=\"looking-ahead-php-85-and-beyond\">Looking Ahead: PHP 8.5 and Beyond</h3>\n<p>The RFCs in discussion paint an exciting picture:</p><ul>\n<li><strong>Pattern matching</strong> — the most requested feature, finally getting serious RFC attention</li>\n<li><strong>Generics</strong> — still the elephant in the room. The PHP Foundation has acknowledged that erased generics (type information available at static analysis time but not at runtime) are the pragmatic path forward</li>\n<li><strong>Pipe operator</strong> — <code>$result = $input |&gt; trim(...) |&gt; strtolower(...) |&gt; sanitize(...)</code> would make functional-style PHP beautiful</li>\n<li><strong>Property accessor asymmetric visibility refinements</strong> — building on 8.4’s foundation</li>\n</ul>\n<h2 id=\"framework-landscape\">Framework Landscape</h2>\n<h3 id=\"laravel-the-dominant-force\">Laravel: The Dominant Force</h3>\n<p>Laravel’s dominance is undeniable. With Reverb (WebSockets), Volt (single-file Livewire components), and continued investment in the first-party ecosystem (Forge, Vapor, Envoyer, Nova, Pulse), it’s become a full platform rather than just a framework.</p><p>The critique that Laravel is “too magical” has softened as the framework embraced explicit typing, better IDE support, and first-class static analysis integration. 
Laravel Pint, Herd, and the official PHPStan extension (<code>larastan</code>) show a commitment to developer tooling that’s hard to match.</p><h3 id=\"symfony-the-enterprise-backbone\">Symfony: The Enterprise Backbone</h3>\n<p>Symfony 7.x continued its tradition of reliability and flexibility. The Messenger component is arguably the best queue/message bus implementation in any PHP framework. Symfony UX with Turbo and Stimulus provides a pragmatic approach to frontend interactivity without the SPA complexity.</p><p>Where Symfony truly shines in 2026: API Platform. Building a standards-compliant, documented, and performant API has never been easier. If you’re building APIs that need to last 5+ years, Symfony’s stability guarantees are unmatched.</p><h3 id=\"the-rising-contenders\">The Rising Contenders</h3>\n<ul>\n<li><strong>FrankenPHP</strong> matured into a legitimate application server, providing a worker mode that keeps PHP in memory between requests. This changes the deployment model entirely — no more PHP-FPM process management headaches</li>\n<li><strong>Laravel Octane</strong> with Swoole/RoadRunner made long-running PHP processes accessible to the masses</li>\n<li><strong>Spiral Framework</strong> pushed the boundaries of what async PHP can do</li>\n<li><strong>Saloon</strong> became the de facto standard for building API integrations</li>\n</ul>\n<h2 id=\"tooling-renaissance\">Tooling Renaissance</h2>\n<p>The developer experience in PHP has improved dramatically:</p><h3 id=\"static-analysis\">Static Analysis</h3>\n<p>PHPStan and Psalm matured to the point where running without them feels reckless. PHPStan’s popularity exploded — most major open-source packages now ship with level 8+ CI checks. The combination of PHP’s gradual typing and PHPStan’s inference means you get TypeScript-like safety without a compile step.</p><h3 id=\"rector\">Rector</h3>\n<p>Automated refactoring became mainstream. 
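As a sketch (assuming Rector&#39;s modern fluent configuration API, <code>RectorConfig::configure()</code>, introduced in Rector 1.0; check the docs for your version), a minimal <code>rector.php</code> looks like this:</p><pre><code class=\"language-php\">use Rector\\Config\\RectorConfig;\n\nreturn RectorConfig::configure()\n    // Directories Rector should scan\n    -&gt;withPaths([__DIR__ . &#39;/src&#39;, __DIR__ . &#39;/tests&#39;])\n    // Apply the rule sets for the PHP version declared in composer.json\n    -&gt;withPhpSets();\n</code></pre>\n<p>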
Upgrading PHP versions, modernizing code patterns, and enforcing architectural rules programmatically — Rector made large-scale codebase modernization feasible for teams that couldn’t afford a manual rewrite.</p><h3 id=\"ide-support\">IDE Support</h3>\n<p>PhpStorm 2025.x releases brought AI-assisted refactoring, better generics support (even for PHPDoc-based generics), and first-class property hooks support. The VS Code + Intelephense combination closed the gap significantly, making PHP development accessible regardless of IDE budget.</p><h3 id=\"testing\">Testing</h3>\n<p>Pest continued to grow as a PHPUnit wrapper that developers actually enjoy writing tests in. The combination of Pest’s expressive syntax with Laravel’s testing utilities created what is arguably the best testing DX in any backend language:</p><pre><code class=\"language-php\">it(&#39;processes payments correctly&#39;, function () {\n    $order = Order::factory()-&gt;withItems(3)-&gt;create();\n\n    $result = $order-&gt;processPayment(\n        new FakePaymentGateway()\n    );\n\n    expect($result)\n        -&gt;toBeSuccessful()\n        -&gt;and($order-&gt;refresh())\n        -&gt;status-&gt;toBe(OrderStatus::Paid)\n        -&gt;paid_at-&gt;not-&gt;toBeNull();\n});\n</code></pre>\n<h2 id=\"community-health\">Community Health</h2>\n<p>The PHP community is in its most professional era:</p><ul>\n<li><strong>PHP Foundation</strong> provided stability that individual contributors couldn’t. Having paid core developers means features and bug fixes ship predictably</li>\n<li><strong>Conferences</strong> rebounded strongly — SymfonyCon, Laracon (US, EU, AU, India), PHPKonf, and regional meetups are thriving</li>\n<li><strong>Content creation</strong> expanded — more PHP-focused YouTube channels, newsletters, and podcasts than ever. The “PHP Renaissance” narrative gained mainstream developer media attention</li>\n<li><strong>Diversity</strong> improved, though more work remains. 
Mentorship programs and scholarship-funded conference tickets are becoming standard</li>\n</ul>\n<h2 id=\"the-honest-challenges\">The Honest Challenges</h2>\n<p>It’s not all sunshine. Here’s what still needs work:</p><h3 id=\"the-generics-question\">The Generics Question</h3>\n<p>Every year we say “generics might come soon,” and every year it doesn’t happen. PHPDoc generics via PHPStan work remarkably well as a stopgap, but the lack of runtime generics means PHP can’t express certain patterns cleanly. The erased generics RFC is the most promising approach, but consensus is slow.</p><h3 id=\"async-isnt-solved\">Async Isn’t Solved</h3>\n<p>Fibers provided the building blocks, but there’s no standard async library in the way JavaScript has native promises. RevoltPHP is the closest thing to a standard event loop, but the ecosystem is fragmented. Most PHP developers still use synchronous code with queue workers, and that’s fine for 95% of use cases.</p><h3 id=\"the-wordpress-association\">The WordPress Association</h3>\n<p>PHP’s identity is still heavily tied to WordPress in the public consciousness. While WordPress is a testament to PHP’s durability, it doesn’t represent modern PHP development. The gap between “WordPress PHP” and “Laravel/Symfony PHP” is wider than the gap between Java and Kotlin.</p><h3 id=\"hiring-pipeline\">Hiring Pipeline</h3>\n<p>Junior developers increasingly learn JavaScript or Python first. PHP’s onboarding pipeline relies heavily on WordPress and legacy projects, which gives newcomers a skewed view of the language. 
More modern PHP educational content is needed.</p><h2 id=\"whats-coming-in-2026-2027\">What’s Coming in 2026-2027</h2>\n<p>Based on RFC discussions and Foundation roadmap:</p><ol>\n<li><strong>PHP 8.5</strong> (November 2025) — shipped on schedule with the pipe operator (<code>|&gt;</code>), clone with syntax for readonly classes, a built-in URI extension replacing the aging <code>parse_url()</code>, <code>#[\\NoDiscard]</code> attribute for safer APIs, <code>array_first()</code> / <code>array_last()</code>, and fatal error backtraces. A solid, focused release.</li>\n<li><strong>PHP 8.6</strong> (November 2026) — Partial Function Application lands via <code>?</code> placeholder syntax, completing the functional programming trifecta with first-class callables and the pipe operator. Native <code>clamp()</code> joins the standard library. Pattern matching and true async (spawn/await/coroutines) are under active RFC discussion but not guaranteed.</li>\n<li><strong>Performance</strong> — the JIT compiler continues to improve. PHP’s performance story for web workloads is already strong, but CPU-bound tasks are getting closer to Go/Rust territory.</li>\n<li><strong>Native typing improvements</strong> — erased generics remain the holy grail, with active RFC work expected through 2026. PHPDoc generics via PHPStan remain the pragmatic stopgap.</li>\n<li><strong>FrankenPHP</strong> and worker-mode PHP will likely become the default deployment model for new projects within 2 years, fundamentally changing how we think about PHP application lifecycle — especially now that the PHP Foundation has formally started collaborating with the FrankenPHP project.</li>\n</ol>\n<h2 id=\"my-take\">My Take</h2>\n<p>After a decade of writing PHP professionally, I’ve never been more optimistic. 
The language has evolved faster than its reputation, the ecosystem is mature without being stagnant, and the developer experience rivals anything in the JavaScript or Python worlds.</p><p>Is PHP the best choice for every project? Of course not. But for web applications, APIs, and the vast majority of backend work, PHP in 2026 is a genuinely excellent choice. The developers who dismiss it are working from outdated information — or they never gave modern PHP a fair shot.</p><p>The boring truth is that PHP lets you ship fast, maintain easily, and hire affordably. In a world chasing the next shiny framework, that’s a superpower.</p><hr>\n",
            "author": {
                "name": "Oleksii Ivanov"
            },
            "tags": [
                   "PHP 8.x",
                   "PHP",
                   "Career"
            ],
            "date_published": "2026-03-22T10:00:00+01:00",
            "date_modified": "2026-03-22T10:35:13+01:00"
        }
    ]
}
