Replacing WordPress with 2,300 Lines of Bun
The site you are reading is around 2,300 lines of TypeScript, Edge templates, and CSS in src/. It scored 99/90/100/100 on PageSpeed the first time I ran it, with no performance work done. It builds in under half a second and deploys to a Cloudflare Worker in under 30 seconds on warm caches. It used to be WordPress.
This is the story of why and how I replaced a perfectly working WordPress site with a hand-rolled static site generator, what I built instead, and the weird decisions that ended up being the right ones.
The old setup
The old eduwass.com was a WordPress install running locally on LocalWP, statically exported to a /static/ directory via Simply Static, pushed to GitHub, and deployed as a Cloudflare Pages project watching static/*. The theme was a custom thing pieced together with the usual plugin pile: Perfmatters for lazy loading, ASE for email cloaking, a dark-mode toggle block, Yoast for SEO, the works.
It worked. It was also constantly costing me small bits of attention: plugin updates, theme drift, a PHP-ish local dev experience, an export step that occasionally produced weirdness, a local MySQL service running for no reason when I'd rather just edit markdown.
The irony was that I was writing plain content in the WordPress editor, then the export produced HTML files, then CF served them statically. At no point in the served-to-visitor path was there any dynamic WordPress behavior. I was paying WordPress's full complexity budget for the authoring side and getting a static HTML stack anyway.
So I ripped it up.
The goal
Hard constraints I wrote down before starting:
- Every line in the repo is something I understand and can change. No "framework did this magic."
- Content stays portable. Markdown with YAML frontmatter, so if I ever want to move to Astro or Hugo, I copy content/ and plug it in.
- Dev loop has to feel instant. Not "hit save, see reload a second later" instant — actually instant, with state preservation.
- No "click a link, wait for a blank page, see content" navigation feel. Prefetching, speculation rules, whatever it takes.
- Feature parity with the old site's quality-of-life stuff: email cloaking, redirect shortlinks (including cloaked ones that hide the destination), analytics, a GitHub contribution graph, syntax-highlighted code blocks.
I didn't write "deploy to Cloudflare" explicitly but it was implicit — the old site already lived there and I liked the edge-first distribution.
The stack
Five production dependencies:
edge.js templating
shiki syntax highlighting
edge-iconify icon rendering
@iconify-json/* icon data (Lucide + Simple Icons)
Bun handles everything else: it's the runtime, the bundler, the dev server, the YAML parser, the markdown parser, the TypeScript transpiler. No Vite, no Webpack, no Astro runtime, no Node.js installed. One tool.
This was a deliberate bet. Bun is new enough to be nervous about, but it's good enough at each of those individual jobs that pulling in a separate tool for each felt like adding weight for marginal quality.
The build pipeline
src/build.ts is 103 lines. Top to bottom, it:
- Wipes dist/ and rebuilds CSS and JS with content-hashed filenames.
- Copies static media (avatar, favicons).
- Runs an idempotent optimization pass: any raw PNG/JPG becomes AVIF at q60; any MP4 over 1.5MB gets re-encoded with x264 CRF 28 and audio stripped. On an already-optimized repo this is a no-op.
- Loads every content collection in parallel (posts, pages, projects, site config, redirects, home content).
- Fetches the GitHub contribution graph via gh api graphql at build time (no token shipped; the local gh CLI auth is trusted).
- Constructs the speculation-rules URL list (every internal page, so the browser can prerender on hover).
- Creates one Edge instance per build with all those collections attached as globals.
- Renders each page, copies colocated assets, writes redirects, writes the sitemap, RSS feed, and robots.txt.
On a real run: 28 pages, 17 redirects, ~300ms. Warm dev server rebuilds in ~100ms.
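The content-hashed filenames from the first step can be sketched with a small helper. This is hypothetical — the real build presumably leans on Bun's bundler naming — but it shows the idea: the hash is derived from the file's contents, so the name only changes when the contents do, which is what makes long cache headers safe.

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: derive a content-hashed name like app.3f9a2c1b.css.
// The hash changes iff the contents change, so stale caches self-invalidate.
function hashedName(name: string, contents: string): string {
  const hash = createHash("sha256").update(contents).digest("hex").slice(0, 8);
  const dot = name.lastIndexOf(".");
  return dot === -1
    ? `${name}.${hash}`
    : `${name.slice(0, dot)}.${hash}${name.slice(dot)}`;
}
```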
Content as portable markdown
Content lives in content/. Every page is markdown with YAML frontmatter:
---
title: About
layout: about
links:
- title: Resume
href: /resume/
description: Check my experience.
- title: Testimonials
href: /love/
description: Kind words from clients and colleagues.
---
The layout: field names a template in src/templates/pages/. If absent, it falls back to the default prose wrapper. This is the same convention Hugo and Jekyll use, which means content/ is fully portable: rename a file, drop it into an Astro or Hugo project, and it works.
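The frontmatter/layout convention can be sketched in a few lines. This is an illustration, not the site's actual loader — `parse` here is a stand-in regex split rather than a real YAML parser, and the "default" name is assumed:

```typescript
type Frontmatter = { layout?: string; [key: string]: unknown };

// Split a markdown file into its raw frontmatter block and body.
// (Sketch only: a real loader would hand the meta string to a YAML parser.)
function splitFrontmatter(src: string): { meta: string; body: string } {
  const m = src.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!m) return { meta: "", body: src };
  return { meta: m[1], body: src.slice(m[0].length) };
}

// Resolve the template name: explicit layout:, else the default prose wrapper.
function resolveLayout(meta: Frontmatter): string {
  return meta.layout ?? "default";
}
```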
The resume is the interesting case. It's a .md file with no body — everything is structured frontmatter (name, role, experience array, education array, etc.) — and it points at layout: resume, which is a standalone Edge template that emits its own <!doctype html> (no site chrome, matches how resumes want to look).
I experimented briefly with .edge files holding both data and layout in one file — a sort of single-file page format — and it's clever but it makes content less portable. Reverting to "content is markdown, layouts are templates" was the right call.
The HMR trick
The dev server is where I spent the most design time, because I wanted it to feel better than framework dev servers, not worse.
The standard full-reload approach wastes everything. You're iterating on a CSS tweak, you scroll to the bottom of a long post to check a margin, save, and the page reloads back to the top. Every time. Intolerable.
So the HMR client (src/dev/hmr-client.ts, 90 lines) does a morph instead. On every file save:
- Fetch the current URL's new HTML.
- If the hashed CSS link changed, swap the <link href> — browser re-fetches styles, no flash.
- If the hashed JS bundle changed, inject a fresh <script src="/app.NEW.js">. The new bundle takes over.
- Use idiomorph to morph <main class="wrap"> in place. Scroll position, focus, iframes — all preserved.
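The "did the stylesheet change?" check comes down to comparing the hashed href between the current document and the freshly fetched HTML. A simplified sketch (the real client works on the live DOM; this version works on strings so the logic is easy to see):

```typescript
// Pull the hashed stylesheet href out of an HTML string (sketch; the real
// client reads document.querySelector('link[rel="stylesheet"]') instead).
function cssHref(html: string): string | null {
  const m = html.match(/<link[^>]+href="(\/[^"]+\.css)"/);
  return m ? m[1] : null;
}

// Content-hashed names mean: different href <=> different CSS content.
function needsCssSwap(oldHtml: string, newHtml: string): boolean {
  return cssHref(oldHtml) !== cssHref(newHtml);
}
```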
The JS hot-swap is the part I'm proudest of. Client-side app code usually can't safely re-execute because it binds event listeners; running init twice doubles them. My app.ts uses an AbortSignal pattern:
function setup(signal: AbortSignal): void {
  initThemeToggle(signal);
  initLazyMedia(signal);
  initHeatGraph(signal);
  initCalWidget(signal);
  initClickTracking(signal);
}

// At module top-level:
window.__appAc?.abort(); // kill the previous instance's listeners
const ac = new AbortController();
window.__appAc = ac;
setup(ac.signal);
Every addEventListener call inside setup takes { signal }. When the signal aborts, all listeners detach automatically. That means re-executing the whole bundle is safe: the new instance aborts the old one, wires up fresh listeners, and the DOM never accumulates duplicates.
In prod this costs a single property write (window.__appAc = ac) and is never read. In dev it's the entire reason live JS edits work without reload.
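The pattern is easy to demonstrate in isolation with a plain EventTarget — no DOM needed. Re-running the module without aborting would leave two live listeners; with the abort, the count stays at exactly one per dispatch:

```typescript
// Demonstration of the { signal } listener pattern described above.
const target = new EventTarget();
let count = 0;

function setup(signal: AbortSignal): void {
  // All listeners registered with { signal } detach when the signal aborts.
  target.addEventListener("ping", () => { count++; }, { signal });
}

let ac = new AbortController();
setup(ac.signal);
target.dispatchEvent(new Event("ping")); // one live listener fires

// "Re-executing the bundle": abort the old instance, wire up a fresh one.
ac.abort();
ac = new AbortController();
setup(ac.signal);
target.dispatchEvent(new Event("ping")); // still exactly one live listener
```

Without the `ac.abort()` line, the second dispatch would hit two listeners and the count would drift upward on every hot reload — exactly the duplicate-listener bug the pattern prevents.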
The result: I edit a template, a CSS rule, or a TypeScript file in src/scripts/. The page updates in under 200ms, my scroll position is preserved, the theme toggle's state survives, any dev-only toolbar I have mounted (more on that in a moment) stays alive across the swap.
Agentation for live feedback
Speaking of dev-only toolbars: for most of this rebuild I had Agentation mounted in dev. It lets me click any element on the page, write a comment, and have it delivered to the AI pair-programming session in my terminal.
I paired it with a tiny webhook: the in-page toolbar's onAnnotationAdd callback POSTs the annotation to a dev-server route that appends to /tmp/agentation-events.jsonl, which Claude Code tails via a Monitor background task. Result: I drop a pin on a button, type "make this outline variant", and within seconds the change lands.
I shipped about twelve annotation-driven fixes this way during the rebuild. The feedback loop is absurdly tight — think of it as the visual equivalent of a REPL.
Critical to keeping it tight: no full page reloads. The HMR morph + AbortSignal dance means the Agentation toolbar's React tree survives every code edit. If I had to reload and re-mount it after every save, the whole loop breaks.
Feature parity with WordPress plugins
Email cloaking. The old ASE plugin used HTML entity encoding, bidirectional text reversal, and a decoy span to fool scrapers while rendering correctly to humans. I ported it to src/render/lib/email-cloak.ts in 57 lines — random entity/percent encoding, reversed halves wrapped in unicode-bidi: bidi-override; direction: rtl, a hidden decoy string between them. Any bare email in markdown gets cloaked automatically during rendering.
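The core bidi trick is compact enough to sketch. This is a simplified illustration, not the actual 57-line module — the real one also mixes entity/percent encoding and inserts a hidden decoy between the halves:

```typescript
// Simplified sketch of the bidi-reversal cloak. Each half is stored reversed
// in the DOM; `unicode-bidi: bidi-override; direction: rtl` makes the browser
// render it right-to-left, so humans see the original text while scrapers
// reading raw HTML see garbage.
function cloakEmail(email: string): string {
  const mid = Math.ceil(email.length / 2);
  const reverse = (s: string) => [...s].reverse().join("");
  const span = (s: string) =>
    `<span style="unicode-bidi: bidi-override; direction: rtl;">${reverse(s)}</span>`;
  return span(email.slice(0, mid)) + span(email.slice(mid));
}
```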
Redirect cloaking. Some of my shortlinks (/love → my Senja testimonials page) I don't want search engines to index or casual URL-inspectors to see at a glance. The old WP setup used a plugin for this. In the new setup, redirects declare their type:
- from: /love
  to: https://senja.io/p/eduwass/x4uHU08
  type: cloak
type: 302 generates a meta-refresh page plus an entry in the _redirects file for Cloudflare to handle at the edge. type: cloak generates an HTML page that embeds the target in a full-bleed iframe — the browser URL stays /love, the content is the destination.
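The two page shapes can be sketched as a single generator. This is illustrative (markup details are assumptions, not the site's actual templates), but it captures the split: a meta-refresh fallback for plain redirects versus a full-bleed iframe for cloaked ones:

```typescript
type Redirect = { from: string; to: string; type: "302" | "cloak" };

// Sketch of the two redirect-page shapes described above.
function redirectPage(r: Redirect): string {
  if (r.type === "cloak") {
    // URL bar stays on r.from; the destination lives inside the iframe.
    return `<!doctype html><html><body style="margin:0">` +
      `<iframe src="${r.to}" style="border:0;width:100vw;height:100vh"></iframe>` +
      `</body></html>`;
  }
  // Plain 302: meta-refresh fallback page; the _redirects entry does the
  // real hop at the edge before this page is ever served.
  return `<!doctype html><html><head>` +
    `<meta http-equiv="refresh" content="0; url=${r.to}">` +
    `</head></html>`;
}
```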
Analytics. Self-hosted Umami, script tag in a shared components/analytics.edge partial so both the main <head> and the redirect pages include it. I also added a ~15-line click tracker to app.ts that fires umami.track("outbound", {url}) on external link clicks and track("download", {file}) on file extensions. The old WP plugin did the same thing.
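The click tracker's classification logic reduces to a pure function, which makes it easy to sketch. The extension list and function names here are illustrative, not the actual app.ts code:

```typescript
// Illustrative extension list — the real tracker's list may differ.
const DOWNLOAD_EXTS = [".pdf", ".zip", ".mp4"];

// Decide which umami.track() event a clicked link should fire, if any.
function classifyClick(
  href: string,
  origin: string,
): "outbound" | "download" | null {
  const url = new URL(href, origin); // resolves relative hrefs against origin
  if (DOWNLOAD_EXTS.some((ext) => url.pathname.endsWith(ext))) return "download";
  if (url.origin !== origin) return "outbound";
  return null; // ordinary internal navigation: nothing to track
}
```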
GitHub contribution graph. Built at build time via gh api graphql, cached to .cache/ for an hour of fast rebuilds. The layout math is ported from the MIT-licensed heat-graph package — one function, 70 lines, no runtime dep.
Speculation rules. Declarative JSON in <head>: list every internal URL with eagerness: "moderate", and Chrome prerenders on ~200ms hover. Clicks feel instant. This isn't a plugin in the WordPress world either — it's just a browser feature nobody uses.
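The whole feature is one declarative script tag in <head>. A minimal example of the shape (the URLs here are placeholders — the build emits the full internal-page list):

```html
<script type="speculationrules">
{
  "prerender": [
    {
      "source": "list",
      "urls": ["/", "/posts/", "/about/"],
      "eagerness": "moderate"
    }
  ]
}
</script>
```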
The deploy
I set up Cloudflare Workers (not Pages). The pitch for Workers is that one deployment serves both static assets and arbitrary dynamic code. Today the wrangler.jsonc declares pure-assets mode:
{
  "name": "eduwass",
  "compatibility_date": "2026-01-01",
  "assets": {
    "directory": "./dist",
    "not_found_handling": "404-page"
  }
}
No main: entry, so it's exactly a static site, same as Pages. But the day I want to add /api/hello, I add main: "src/worker.ts" and write a Worker — no migration.
GitHub Actions handles the build: install Bun, cache apt packages (libavif-bin needed for media optimization), cache Bun's install directory, run bun run build, deploy via wrangler deploy. 29 seconds end-to-end on warm caches.
One subtle piece: the workflow also runs on a daily cron (17 6 * * *, offset from the hour to avoid GitHub's :00 thundering herd). That keeps the GitHub contribution graph on the homepage fresh without me needing to push anything. The site literally rebuilds itself daily.
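The trigger section of such a workflow might look like this (a sketch of the shape, not the actual workflow file):

```yaml
on:
  push:
    branches: [main]
  schedule:
    # Daily rebuild so the contribution graph stays fresh without a push;
    # offset from :00 to avoid GitHub's top-of-the-hour thundering herd.
    - cron: "17 6 * * *"
```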
The numbers
- 2,289 LOC in src/ total (TS + CSS + Edge).
- 5 production dependencies. Everything else is devDeps.
- 99 / 90 / 100 / 100 PageSpeed mobile, out of the box.
- 0.8s First Contentful Paint, 1.8s LCP, 0 CLS, 0ms TBT.
- ~300ms full cold build. ~100ms warm incremental rebuild in dev.
- 29s GitHub Actions end-to-end deploy on warm caches.
I did not spend a single minute on performance optimization. These numbers fell out of: small client bundle (~7KB app.ts), system fonts, lazy media, content-hashed assets with long cache headers, speculation rules, content-visibility on below-the-fold grids. Good defaults compound.
What I'd do differently
I flip-flopped on one design decision four times over the course of the rebuild: colocation (single .edge file with data-and-template) versus separation (.md content in content/, .edge layouts in src/templates/). I tried both, landed on colocation, got uncomfortable with the mixed extensions, and ended up back at separation. The lesson is that the thing that felt clever (one file for resume data + resume layout) was actually costing portability — and portability is the explicit goal. Don't get attached to clever structures when they fight your own constraints.
The other thing I'd tighten: I wrote a lot of comments explaining what the code does, then swept through and removed 250 lines of restating-the-obvious, keeping only the ones that explain why something non-obvious exists. The finished codebase is genuinely easier to navigate. If I'd been disciplined about this from the start, it would have saved time.
What's next
Nothing, really. This post is the twelfth entry. The site loads under a second. The dev loop is faster than it needs to be. I can write posts in markdown, deploy by pushing to main, and forget about it until the next idea.
If you're considering a rewrite like this: the question isn't whether a tiny custom stack is technically feasible — it's whether you'll use the time you save not maintaining it on things worth doing. For me, the answer is yes. For a site with frequent contributors, a CMS with dashboard logins, or anything resembling a commerce flow: this is the wrong answer and you should use a framework.
For a personal site where you're the only one touching it and you value owning every line, 2,300 lines is plenty.
Read other posts →