DesignKey Studio
General
December 29, 2025
5 min read
By Daniel Killyevo

2025 in Review — What We Shipped and What We Learned

A candid year-end look at what DesignKey Studio shipped in 2025, what surprised us, what we'd do differently, and where we think the work is heading.

year-in-review · agency-ops · reflection

It's the week between Christmas and New Year, the quietest stretch of the year for most software shops, and the first moment since January we've had to look up from the work and think about it. This is the short, honest version of what 2025 was for us — what we shipped, what surprised us, and the handful of things we'd do differently if we got a do-over.

What we shipped

No client names or numbers — those conversations belong to the clients, not our year-end post — but the shape of the work is worth describing because it shifted meaningfully over the year.

Custom SaaS builds remained our largest category, mostly mid-stage startups and vertical operators building or rebuilding their core product. These projects got shorter and sharper in 2025. A build that would have been a six-month engagement in 2023 was often a three-month engagement this year — not because we cut corners, but because our tooling and the clients' clarity both got better.

AI integrations went from "one-off experiment" to a real practice line. Most of this work sits under the categories we wrote about in our AI integration guide — augmented workflows inside existing SaaS, retrieval-backed assistants, and a handful of more ambitious AI-native features. The volume of these projects roughly tripled over the year; the success rate was better than we feared, largely because we got more disciplined about the readiness conversation before scoping.

UX and redesign work filled the calendar in a way we didn't fully predict. A meaningful number of engagements started as "we want to add AI features" and turned into "we want to redesign the product to accommodate AI features properly." The design discipline around AI UX — loading, streaming, uncertainty, correction, handoff — matured visibly over the year.

Local Tampa work grew. More direct referrals, more in-person meetings, more projects where the geography of the client and team actually mattered. Our bet on being a Florida-based agency with a local story paid off in ways that remote-first competitors couldn't match.

What surprised us

How quickly AI tooling collapsed build timelines. At the start of 2025 we had a working hypothesis that AI-assisted development would be modestly useful for experienced engineers and occasionally useful for less experienced ones. By mid-year that was visibly wrong. For the tasks where AI assistance is genuinely good — scaffolding, test writing, translation between frameworks, documentation, boilerplate — we're seeing compounding gains that don't look like they've plateaued. Planning a 2026 build the way we planned a 2023 build is now a mistake.

Client appetite for AI — with teeth. Two years ago, "AI strategy" meetings were theatrical. Leaders wanted to be seen considering it. This year, the meetings had budgets attached and concrete problems to solve. More importantly, the quality of the conversations went up — more clients arrived with a defined workflow to improve rather than "we should do something with AI." When founders and operators show up with a real problem and a willingness to measure, the work is both easier and more valuable.

How fast "boring AI" won over "exciting AI." The integrations that worked in 2025 overwhelmingly looked like boring software: extract fields from documents, classify tickets, draft replies, search content. The flashy autonomous-agent pitches mostly didn't survive the readiness check, which we read as a sign that the work is finally starting to match what the technology can reliably do.

How much of the value was in the supporting engineering. We kept writing that 80% of AI integration is normal software engineering, and 2025 underlined it. Projects that shipped well had disciplined data prep, proper evals, clear ownership, and thoughtful UX. The ones that didn't had none of those, regardless of model choice.

What we'd do differently

A short list, in rough order of how much the oversight cost us:

Start the eval conversation earlier. In several projects this year, we treated evals as a build-phase concern and should have treated them as a scoping-phase concern. Clients whose first exposure to "we need a golden set" happened in week six were less enthusiastic about it than clients whose first exposure was in week one. Worth pushing left.
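For readers who have not sat through one of these conversations, the "golden set" is nothing exotic: a small list of labeled examples the client signs off on early, plus a script that scores the system against them. Here is a minimal sketch in Python; every name in it is a hypothetical placeholder, and `extract_invoice_total` stands in for whatever model-backed step is actually under test.

```python
def extract_invoice_total(text: str) -> float:
    """Stand-in for the model-backed extraction step under test.

    A real version would call a model; this toy just finds the first
    token that parses as a number.
    """
    for token in text.replace("$", " ").split():
        try:
            return float(token)
        except ValueError:
            continue
    return 0.0


# The golden set: labeled examples the client has reviewed and agreed
# are correct. Small is fine to start; agreed-upon is the point.
golden_set = [
    ("Invoice total: $1,200.00", 1200.00),
    ("Amount due $89.50 by Friday", 89.50),
]


def run_evals(cases):
    """Score the extraction step against the golden set."""
    passed = sum(
        1 for text, expected in cases
        if abs(extract_invoice_total(text.replace(",", "")) - expected) < 0.01
    )
    return passed, len(cases)


passed, total = run_evals(golden_set)
print(f"{passed}/{total} golden cases passed")
```

Nothing about this needs to wait for the build phase; the whole value is that the examples and the pass criteria exist before scoping locks in.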

Say "not yet" to more AI projects. When a readiness check came back yellow, we sometimes proceeded anyway because the client was motivated. In retrospect, a few of those projects would have gone better six months later with the prerequisites in place. Motivation is nice; readiness is load-bearing.

Invest earlier in internal tooling. We built meaningful internal tools for ourselves this year — eval harnesses, scaffolding, reusable retrieval patterns — but we built each one the first time we needed it rather than the third. That choice roughly doubled some project kickoff timelines. Going into 2026 we have a much cleaner internal baseline.

Write more publicly. We had strong opinions about AI integration patterns six months before we put them in print. The clients we ended up working with this year almost universally had read something we'd written. More writing, earlier, more specifically.

What we won't change

Some things that worked and will keep working:

  • Small, senior teams per engagement. We don't staff junior-heavy projects, and we don't plan to.
  • Fixed discovery, flexible build. Every meaningful engagement starts with a scoped discovery phase before any build estimate gets committed.
  • No "AI strategy deck" engagements. If the work isn't going to result in something shipped, we aren't the right partner.
  • Tampa-based, in-person when it helps. Remote when remote is better. In person when that's what the project needs.

Where we think 2026 goes

We try not to predict — most of the 2025 predictions we read a year ago now look either obvious or wrong. A few shapes feel stable enough to point at: the gap between teams that ship AI integrations with discipline and teams that merely ship them is going to widen. The timeline compression we saw this year hasn't flowed into pricing or client expectations yet; that catches up in 2026. And the vertical SaaS opportunity feels stronger than it has in years — the cost to build one has dropped faster than the cost to find customers for one.

Mostly, we're grateful to the clients who trusted us with their problems this year, the team who made the work possible, and the readers who push back on these posts. More work, more writing, more slow compounding in 2026.

Thanks for a good year.

If you've got a 2026 project you'd like to talk through — software, AI, or otherwise — we'd love to hear about it. Say hello.

