Part 1: Git, Claude Code, and why a designer should care. The mental model, the tooling, the first session.
Part 2: Generating design variations — the TJUG case study. One countdown card, 14 variations, real feedback loops.
Part 3: From 14 variants to a shipped component. Narrowing down, wiring the winner into production.
Part 1 set up the mental model. Part 2 produced fourteen variations on a private URL. Part 3 is where design exploration turns into something real users see on the home page. That transition is where most AI workflows fall apart — and where a small amount of discipline pays back every hour you saved earlier.
The rules are the same ones any developer uses. You do not need to know how to write a pull request from scratch — Claude can draft one for you. You just need to know what a good one looks like when you read it, and what to check before a developer clicks merge.
What this part gets you
Picking a winner (or two, or none)
Fourteen variants is a menu, not a ranking. The point was never to pick the most beautiful one — it was to have enough variety that the right answer for the job can reveal itself against the wrong ones. Before you pick, re-anchor on the brief.
Who will see it?
Returning visitors on mobile? New visitors on a desktop homepage? A variant that works for one can fail for the other.
What action do you want?
Awareness, click-through, dismiss-and-remember? Each has a different shape of ideal component.
What must not happen?
Does this block content? Compete with the nav? Break on a slow connection? Rule out candidates that break the host page.
When the TJUG team ran through these three questions, the answer was surprising. The most visually striking variants — Neon, Ambient Hero, Split Gradient — were impressive as hero surfaces, but the home page already had a hero. What was missing was a low-friction, dismissible signal above the nav. That is V8, the Announcement Strip.

You do not have to pick just one
Wiring the winner into production
The preview page was a safe sandbox. Production wiring is different — it has to account for when the component should appear, what data it depends on, and how a user can opt out. This is where you give Claude one more brief and let it scope the work.
The production brief
“Add V8 to the landing page so it sticks to the top above the top nav, and only gets displayed if there is an event scheduled on Meetup. Also make it dismissible, and remember the dismissal until a new event is scheduled.”
Claude turned that sentence into four concrete engineering decisions. It is worth understanding them, because they are the scaffolding every future production wiring will need:
Server-side data
The root layout becomes async. It fetches the next event from Meetup on the server, so the banner decides whether to render before the first paint. No flash of empty banner on load.
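The server-side half of that decision boils down to one pure question: given whatever Meetup returns, what is the next event? A minimal sketch, assuming a hypothetical event shape of `{ name, date }` (the real Meetup payload will differ) — the async layout would await something like this before rendering:

```typescript
// Hypothetical shape; the real Meetup API response looks different.
type MeetupEvent = { name: string; date: string };

// Pick the soonest event that has not started yet, or null if none.
// Returning null is what lets the layout skip the banner before first paint.
function getNextEvent(events: MeetupEvent[], now: Date): MeetupEvent | null {
  const upcoming = events
    .filter((e) => new Date(e.date).getTime() > now.getTime())
    .sort(
      (a, b) => new Date(a.date).getTime() - new Date(b.date).getTime()
    );
  return upcoming[0] ?? null;
}
```

The point of isolating the decision in a pure function is that the layout stays trivial: fetch, call `getNextEvent`, and either render the banner with the event or render nothing.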
Client-side freshness
An effect re-checks /api/events after mount. If the ISR cache was stale, the banner updates to the correct event. If Meetup says "no upcoming event," the banner exits smoothly.
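The freshness logic has three outcomes worth keeping straight: the re-check fails (keep what the server rendered), it returns a different event (swap it in), or it returns nothing (hide the banner). A sketch of that decision, with hypothetical names — the real effect would call this after fetching `/api/events`:

```typescript
// Hypothetical shape; the real payload will differ.
type MeetupEvent = { name: string; date: string };

// serverEvent: what the async layout rendered on first paint.
// freshEvent: the re-check result — undefined means the fetch failed,
// null means Meetup explicitly reported no upcoming event.
function reconcile(
  serverEvent: MeetupEvent | null,
  freshEvent: MeetupEvent | null | undefined
): MeetupEvent | null {
  if (freshEvent === undefined) return serverEvent; // failed check: keep as-is
  return freshEvent; // fresh event replaces a stale one; null exits the banner
}
```

Distinguishing "fetch failed" from "no event" is the detail that keeps a flaky network from hiding a banner the server already rendered correctly.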
Sticky layout
Banner and header are wrapped in a shared sticky container so they scroll together as one unit. Body overflow switches from hidden to clip because the older value silently breaks sticky positioning — a subtle CSS gotcha Claude flagged and fixed on its own.
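The gotcha is easy to reproduce: `position: sticky` sticks relative to the nearest scroll container, and `overflow: hidden` on an ancestor quietly creates one, so the element never appears to stick. `overflow: clip` cuts off overflow without creating a scroll container. A minimal illustration (class name hypothetical):

```css
/* `hidden` would make <body> the sticky element's scroll container,
   so the banner would stick to a box that never scrolls. `clip`
   truncates overflow without that side effect. */
body {
  overflow: clip; /* not `hidden` */
}

/* Wraps banner + header so they scroll (and stick) as one unit. */
.sticky-shell {
  position: sticky;
  top: 0;
}
```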
Dismissal memory
On dismiss, a localStorage key is written, namespaced to the event date. Next time Meetup schedules a new event, the key does not match and the banner shows again. Per-event memory without any server work.
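The per-event trick is entirely in the key name. A sketch, with a hypothetical key format and the storage object injected so the logic is testable outside a browser (in the app this would just be `window.localStorage`):

```typescript
// Minimal slice of the Storage interface, so this runs outside the DOM.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Key format is hypothetical; the point is that the event date is in it.
function dismissalKey(eventDate: string): string {
  return `banner-dismissed:${eventDate}`;
}

function dismiss(store: KeyValueStore, eventDate: string): void {
  store.setItem(dismissalKey(eventDate), "1");
}

function isDismissed(store: KeyValueStore, eventDate: string): boolean {
  // A new event has a new date, so the old key no longer matches
  // and the banner shows again — no server state involved.
  return store.getItem(dismissalKey(eventDate)) !== null;
}
```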
These four decisions are invisible in the final screenshot but essential to the work being production-quality. Claude made them; the designer reviewed the diff and approved.

You do not need to review the engineering
The handoff pack — what goes in a PR
A pull request is where design work becomes a conversation with engineering. The PR body is the single artifact the reviewer reads before anything else. Make it obvious what changed, why, and how to verify it. Claude can draft the whole thing; your job is to make sure the content matches what you actually did.
Title — one line, imperative, specific
Bad: "Update hero." Good: "Ship V8 — sticky event announcement banner on the home page." The reviewer should know exactly what merges.
Summary — three to five bullets
What the user sees. What the code introduces. Any behavior changes. Skip implementation detail — the diff has that.
Test plan — a checklist a reviewer can run
One line per scenario. "Banner renders when Meetup has an event." "Dismiss persists after reload." "Does not render on /contact." This is your proof that you tested.
Screenshots — before and after
Two images, side by side when possible. Desktop and mobile if the breakpoints matter. A reviewer should see the visual change in two seconds.
Linked issue or brief
If there was a ticket, link it. The PR becomes part of the design decision record and will be findable by the next person who asks "why is this here?"
Ask Claude to draft the PR body
Once the work is done, the prompt is literally: “Draft a pull request description for this branch. Include a summary, a test plan, and a screenshot placeholder.” Claude reads the commits and writes the PR description in about thirty seconds. You review and publish.
Ship V8 — sticky event announcement banner on the home page
designer-daniel wants to merge feature/countdown-card-variants into main
Summary
Adds a V8 announcement strip above the top nav on the home page. Only renders when Meetup has an upcoming event; dismissal persists in localStorage keyed by event date.
Test plan
- ✓ Banner appears above the nav when Meetup has an upcoming event
- ✓ Countdown ticks every second
- ✓ Dismiss button hides the banner and persists on reload
- ✓ Banner does not appear on routes other than /
- ✓ Mobile breakpoint does not overflow
- Typecheck: passed
- Lint: passed
- Build: passed
- Unit tests: passed
- Visual regression: passed
Reviewers
- eng-lead: Approved
- design-partner: Pending
Labels
Linked: Issue #47 · Next-meetup surface
The title
One line. Written in imperative ("Ship V8 …"). What merges is what it says.
The description + test plan
Summary of what, and a checklist of what a reviewer should verify.
Reviewers + labels
Who has to say yes. What buckets this PR lives in.
The merge gate
Green when checks pass and reviewers approve. Nothing happens until this is clicked.
Two lists — yours, and the developer’s
A PR goes through two reviews: yours (before you publish) and a developer’s (before they merge). They look at different things. Run your list first so the developer’s list has less to catch.
Your checklist (before you publish)
The developer's checklist (before they merge)
These checks follow you
If a check fails, ask Claude to fix it
What happens after merge
Merging is not the end of the job — it is the start of observation. The first 24 hours after a change lands are when you learn whether the decision held up. Three things to watch:
Check production yourself
Five minutes after merge, open the site in an incognito window. Walk through the flow you shipped. Things that looked fine on localhost occasionally misbehave on production because of CDN caching or environment differences. Catch it now, not next week.
Look at the data, not just the pixels
Is click-through changing? Dismissal rate? Bounce? Analytics is the other half of the feedback loop. If a variant is beautiful and nobody interacts with it, that is data about the direction, not about the component.
Capture what you learned
Drop a short note in the design channel: what you shipped, what surprised you, what you would do differently. Future-you and the next designer benefit more from the note than from the artifact.
The payoff compounds
What the three parts gave you
Zoom out. The series covered three things:
A mental model
Branches as sandboxes, commits as snapshots, GitHub Desktop as your review window. Enough to follow any engineering conversation without the jargon.
A repeatable loop
Brief → generate → review → iterate → ship. Every step has a shape and a signal for when to move on. You can run this on any component, any project.
A handoff discipline
The PR body, the test plan, the two-list review. The difference between a fun experiment and a change that ends up in production.
None of it depends on command-line fluency. None of it depends on knowing React. It depends on the three skills designers already have: a clear brief, a strong review eye, and the discipline to not ship something you have not actually seen.
Keep the cheat sheet nearby
The Claude Code Cheat Sheet for Designers is meant to live in an always-open tab. Copy-ready prompt patterns for the seven most common design tasks, the nine slash commands worth remembering, and links to the UI/UX skills on GitHub worth bookmarking. Return to it often.
You finished the series
Now go make something
Pick a component on your current project that has been bothering you. Open Claude Code, brief it with one sentence, and see what happens. The first session is the scariest. After that it is just another tool in the kit.
