✒️ Quill

How I built this site with Claude Code

by ada · 2026-05-01 · edited 2026-05-01

This blog you're reading is itself the project. It's a small portfolio piece I assembled in one sitting with Claude Code — Anthropic's CLI for Claude. The point of writing this post is to show what's actually inside, so anyone reading can pick it apart and learn from it.

The shape of the thing

Six containers behind a single port:

:8080 ──▶ gateway (nginx) ──┬─▶ /auth/*       auth (Flask)  ─┐
                            ├─▶ /api/*        blog (Flask)  ─┼─▶ postgres
                            ├─▶ /media/*      blog (static) ─┘
                            └─▶ everything    web (Flask + Jinja2 + HTMX)

docker-compose.yml wires them together. Nginx is a thin reverse proxy that routes by path prefix. The auth service owns users. The blog service owns posts and uploaded images. The web service is the only thing rendering HTML — it talks to auth and blog over the internal Docker network. Postgres has two logical databases, one per service, so each service really does own its schema.

That's the actual microservice story: each Flask app is independently deployable, has its own migration history, and only communicates with the others over HTTP.

How auth flows between services

A user logs in. The web service POSTs to /auth/login. The auth service verifies the bcrypt hash and returns a JWT signed with HS256 and a shared secret pulled from the environment. The web service stores that JWT in an HttpOnly, SameSite=Lax session cookie.

When the user creates a post, the web service forwards the JWT in an Authorization: Bearer <token> header to the blog service. The blog service validates the JWT locally, with the same shared secret — no synchronous round-trip back to auth. This is the standard JWT story: trade revocation flexibility for latency.

The JWT itself carries the user's id (sub claim) and username (a custom claim we add at issuance). The blog service trusts both, because the signature proves they came from auth.
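
To make the shared-secret mechanics concrete, here is a stdlib-only sketch of what HS256 issuance and validation amount to. The real services would use a proper JWT library, and this sketch deliberately skips expiry (exp) checking; it only shows why the blog service can trust the claims without calling auth:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, secret: str) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    pad = "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload + pad))
```

Any service holding the secret can verify locally; tampering with the payload or signing with a different secret fails the compare, which is exactly why the blog service can trust sub and username.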

How pagination is wired up

Pagination lives in services/blog/app/routes.py:

@bp.get("/api/posts")
def list_posts():
    page = max(1, int(request.args.get("page", 1)))
    page_size = current_app.config["PAGE_SIZE"]
    stmt = select(Post).where(Post.published.is_(True)).order_by(desc(Post.created_at))
    total = db.session.execute(select(func.count()).select_from(stmt.subquery())).scalar_one()
    posts = db.session.execute(stmt.offset((page - 1) * page_size).limit(page_size)).scalars().all()
    return jsonify(items=[p.to_dict() for p in posts], page=page, page_size=page_size, total=total)

Two queries: one for the page slice (offset + limit), one for the total count (so the frontend can render "Page 2 of 5"). The web service consumes the count and passes it to the Jinja template, which renders prev/next buttons only when they make sense:

{% set total_pages = (total + page_size - 1) // page_size %}
{% if total_pages > 1 %}
  ...
{% endif %}

The (total + page_size - 1) // page_size expression is the standard ceiling-division idiom: it computes ceil(total / page_size) in pure integer arithmetic, no float conversion or math.ceil needed.
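
Pulled out as a helper (hypothetical names, not the repo's code), the arithmetic the template needs is just:

```python
def page_meta(total: int, page: int, page_size: int) -> dict:
    # ceil(total / page_size) without floats
    total_pages = (total + page_size - 1) // page_size
    return {
        "total_pages": total_pages,
        "has_prev": page > 1,
        "has_next": page < total_pages,
    }

# page_meta(total=42, page=2, page_size=10) -> 5 pages, prev and next both shown
```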

How image uploads work

There are two paths that look similar but aren't:

  1. Direct API upload — POST /api/images with a Bearer token and a multipart file field. Used by automated callers.
  2. Browser editor upload — the EasyMDE Markdown editor's image button POSTs to /upload/image on the web service, which proxies to the blog service. The browser only sees a CSRF token; the JWT lives in the HttpOnly cookie and is added server-side.

This second path matters: if the browser had to send the JWT directly to /api/images, we'd have to expose the JWT to JavaScript, which means an XSS bug in any third-party library could exfiltrate it. By proxying through the web service, the JWT never leaves the cookie.
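
The forwarding step itself is small. A stdlib-only sketch (hypothetical helper, not the repo's code) of what the web service does: build the multipart body server-side and attach the Authorization header from the JWT it holds, so the token never reaches browser JavaScript:

```python
import uuid

def build_forward_request(jwt_token: str, filename: str, file_bytes: bytes,
                          content_type: str = "image/png"):
    """Return (headers, body) for the proxied POST to the blog service."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode() + file_bytes + f"\r\n--{boundary}--\r\n".encode()
    headers = {
        # JWT attached server-side, read from the HttpOnly session cookie;
        # the browser only ever supplied a CSRF token.
        "Authorization": f"Bearer {jwt_token}",
        "Content-Type": f"multipart/form-data; boundary={boundary}",
    }
    return headers, body
```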

The blog service validates the upload with Pillow:

img = PILImage.open(io.BytesIO(raw))
img.verify()                    # structural sanity check
img = PILImage.open(io.BytesIO(raw))   # reopen — verify() exhausts the stream
if img.format not in ALLOWED_FORMATS: raise ImageError(...)
if img.width > max_width:
    img = img.resize((max_width, int(img.height * max_width / img.width)), PILImage.LANCZOS)

Then it generates a fresh uuid4().hex filename, writes to a Docker named volume mounted at /var/media, and returns {"url": "/media/<filename>"}. The original filename never touches disk — that's how you avoid path traversal and "i-uploaded-a-file-named-../etc/passwd" mischief.
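
The naming rule is small enough to sketch (hypothetical helper; the format-to-extension mapping is an assumption, not lifted from the repo):

```python
import uuid
from pathlib import Path

MEDIA_ROOT = Path("/var/media")
EXT_FOR = {"JPEG": ".jpg", "PNG": ".png", "GIF": ".gif", "WEBP": ".webp"}

def media_path_for(pil_format: str) -> Path:
    # The stored name comes entirely from uuid4 -- the client-supplied
    # filename is discarded, so "../etc/passwd" never influences the path.
    return MEDIA_ROOT / (uuid.uuid4().hex + EXT_FOR[pil_format])
```

The extension comes from Pillow's detected format, not the upload's claimed name, so a .png that's really a JPEG gets stored honestly.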

How safe Markdown rendering works

User-supplied Markdown can't be rendered raw — <script> would execute in everyone's browser. Two libraries do the work:

raw_html = markdown.markdown(body, extensions=["fenced_code", "tables", "nl2br", "sane_lists"])
cleaned = bleach.clean(
    raw_html,
    tags=["a", "p", "h1", "h2", "h3", "h4", "h5", "h6", "ul", "ol", "li",
          "strong", "em", "code", "pre", "blockquote", "img",
          "table", "thead", "tbody", "tr", "th", "td", "span", "div", "br", "hr"],
    attributes={"a": ["href", "title", "rel"], "img": ["src", "alt", "title", "width", "height"], ...},
    protocols=["http", "https", "data", "mailto"],
    strip=True,
)

The allowlist is paranoid by default. <script> isn't on it, so it's stripped. onclick isn't in any tag's attribute list, so it's stripped. URL schemes are restricted to known-safe ones, so javascript:alert(1) becomes a dead link. Whatever survives is what the user actually wrote.
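
A quick way to see the allowlist in action, using a much shorter allowlist than the real one (assumes bleach is installed):

```python
import bleach

# A deliberately nasty snippet: inline handler, script tag, javascript: URL.
dirty = ('<p onclick="steal()">hi</p>'
         '<script>alert(1)</script>'
         '<a href="javascript:alert(1)">link</a>')
cleaned = bleach.clean(
    dirty,
    tags=["a", "p"],
    attributes={"a": ["href"]},
    protocols=["http", "https"],
    strip=True,
)
# The onclick attribute and <script> tags are stripped, and the
# javascript: href is dropped because its scheme isn't allowed.
```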

How the test suite proves it works

Two suites, both run against the live docker compose up stack:

tests/e2e/test_api.py — black-box HTTP. Register, login, /me round-trip; anonymous gets 401 on writes; an author can full-CRUD their own posts but a different user gets 403 on edit/delete; image upload returns a /media/... URL whose bytes decode as an image; pagination yields the expected counts; slug collisions auto-suffix.
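
The slug-collision behavior that suite checks can be sketched like this (hypothetical helper, not the repo's code):

```python
import re

def unique_slug(title: str, existing: set[str]) -> str:
    # Lowercase, collapse non-alphanumerics to hyphens, then suffix
    # -2, -3, ... until the slug is free.
    base = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-") or "post"
    slug, n = base, 2
    while slug in existing:
        slug = f"{base}-{n}"
        n += 1
    return slug

# unique_slug("Hello World", {"hello-world"}) -> "hello-world-2"
```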

tests/e2e/test_browser.py — real Chromium via Playwright. Register flow → home; wrong password → flash visible; create post via the editor (typing into CodeMirror's API directly); edit; delete; cannot edit someone else's post (403 page); anonymous can read.

Both suites are wired into one Make target:

make test-e2e

This brings up the stack, waits for healthchecks, runs migrations, runs both suites, tears down. Exit non-zero if anything fails.

What the assistant did vs. what I did

The architecture, file structure, security choices (bleach allowlist, JWT proxy for uploads, novalidate on the form so the hidden EasyMDE textarea wouldn't block submit), the test suite, the editorial Tailwind look — all of that came out of one Claude Code session. I steered: I asked for a polished portfolio piece, picked between options where it asked, and pointed out two things that needed fixing once I clicked around (cover image needed to be uploadable, not a URL paste; the registration form was rejecting valid input because email_validator was missing from one service's requirements).

The interesting part isn't that an LLM wrote the code. It's that the workflow was: describe the system, see clarifying questions, watch a plan get written, watch the plan get executed, run the live stack, click around, file two bugs, watch them get fixed. Like working with a fast-moving collaborator who occasionally needs a code review.

If you want to try it: https://claude.com/claude-code.