Aidin Niavarani

I help software product teams use AI to ship faster and build better.

Embedded technical leadership for product teams of 5–30.

16+ years shipping
Co-founded NextGen Kitchens ($1M/mo)
Production AI: RAG · Recommendations · Forecasting · Agent systems
Based in Vancouver

Most of the people I work with arrive through someone they trust. If that's how you got here — welcome.

The Shift

Most software teams feel like AI should be helping more than it is.

Your team has ChatGPT subscriptions, maybe a Cursor license or two, probably a Copilot seat. Someone read the Anthropic economic index memo and someone else watched a keynote. There's a Slack channel called #ai where people share links. And yet — your delivery cycle hasn't meaningfully changed. Requirements still take too long. Specs still drift. Testing is still mostly manual. Alignment between product and engineering still leaks hours every week.

This isn't a you problem. Most teams are here. The tools are real, the hype is real, and the gap between 'we use AI' and 'AI actually compresses our delivery cycle' is also real. Closing that gap is less about picking the right tool and more about rewiring how your team moves from idea to shipped feature — which parts of the cycle AI makes faster, which parts it quietly makes worse, and where to start.

I've done this work on my own team and with others. The pattern is consistent enough that I wanted to write it down.

What Works

Seven places AI actually compresses the SDLC. And three where it doesn't.

What follows is the short version of what I've seen work across my own team and the teams I've advised over the last two years. If you only read one section on this page, read this one. If you want to talk after, the form at the bottom is open. If you don't — take what's useful, leave the rest, and ship something good.

01 / 07

Requirements synthesis.

The fuzzy-to-concrete translation is where most teams bleed weeks.

Our PM tooling runs AI over every requirements doc to check that the technical spec actually matches the product intent — and that summaries read cleanly instead of as bullet soup. The failure mode that taught me this: a shipped feature where the product doc and the engineering ticket diverged by one assumption, and nobody caught it until QA. The check is boring. That's why it works.

How to try it

Pick your next mid-size feature. Drop every related Slack thread, doc, and interview note into a single context window. Ask for a structured spec. Review the output, don't edit in place — note where it got things wrong. The diff is what tells you what your team actually knows that isn't written down.
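The "single context window" step above can be sketched as a small prompt-assembly function. This is a sketch under assumptions — the section names, the disagreement-flagging instruction, and the source labels are placeholders for whatever your team's spec template actually looks like.

```python
# Assemble every related source into one prompt that asks for a structured
# spec. The output schema below is a hypothetical example, not a standard.
SPEC_INSTRUCTION = """Produce a structured spec with these sections:
Problem, Users, Requirements (numbered), Non-goals, Open questions.
Flag any point where the sources below disagree with each other."""

def build_spec_prompt(sources: dict[str, str]) -> str:
    """Combine Slack threads, docs, and interview notes into one prompt."""
    blocks = [f"--- SOURCE: {name} ---\n{text.strip()}"
              for name, text in sources.items()]
    return SPEC_INSTRUCTION + "\n\n" + "\n\n".join(blocks)

prompt = build_spec_prompt({
    "slack-thread-checkout": "PM: users drop off at step 3 ...",
    "interview-2024-03-02": "User says the error message is unclear ...",
})
# Send `prompt` to whatever model your team uses, then review the output
# without editing in place -- the places it gets wrong are the signal.
```

The disagreement-flagging line matters: the diff between sources is exactly the unwritten knowledge the section above describes.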

02 / 07

Spec-to-ticket decomposition.

Breaking specs into engineering tickets is pattern-matching, which is what LLMs are good at.

Once you have a spec, the next ten hours of someone's week is converting it into Jira/Linear/GitHub issues with the right acceptance criteria, dependencies, and estimates. This is a template-shaped task. Feed the spec in, get issues out, have the tech lead adjust. A good setup turns a two-day process into a two-hour one, and the lead spends their time on the architectural calls rather than the formatting.

How to try it

For your next sprint, generate the ticket list from the spec with AI first. Compare it to what your team would have written manually. Keep whatever's better; once the signal is clear, trust the process.
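Because this is a template-shaped task, the model's output is worth validating before it hits your tracker. A minimal sketch, assuming the model is prompted to return JSON — the field names (`title`, `acceptance_criteria`, `depends_on`) are hypothetical and should mirror your own ticket template:

```python
import json

# Fields the tech lead needs in order to review a generated ticket at all.
REQUIRED = ("title", "acceptance_criteria", "depends_on")

def parse_tickets(model_output: str) -> list[dict]:
    """Parse the model's JSON ticket list; reject tickets missing any
    field the human reviewer depends on."""
    tickets = json.loads(model_output)
    for t in tickets:
        missing = [f for f in REQUIRED if f not in t]
        if missing:
            raise ValueError(f"ticket {t.get('title', '?')!r} missing {missing}")
    return tickets

sample = '''[
  {"title": "Add checkout API endpoint",
   "acceptance_criteria": ["returns 201 on success", "validates cart"],
   "depends_on": []},
  {"title": "Wire endpoint into dashboard",
   "acceptance_criteria": ["order appears within 5s"],
   "depends_on": ["Add checkout API endpoint"]}
]'''
tickets = parse_tickets(sample)
```

The validation step is what keeps "have the tech lead adjust" from silently becoming "have the tech lead rewrite."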

03 / 07

Test scaffolding.

Not the tests themselves. The boilerplate around them.

AI writing tests unsupervised is a liability — the tests pass because they were written to pass, not because the code is correct. But AI writing the scaffolding around tests (setup, teardown, mocks, fixtures, parametrization) is reliably useful. Your engineers write the meaningful assertions; the AI does the yak-shaving. This alone recovers 15–25% of the time a typical engineer spends 'writing tests.'

How to try it

Next time someone writes a new test file, have them generate the scaffolding first, then write the actual assertions by hand. Most will notice the quality-of-life jump within a week.
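What the scaffolding/assertion split can look like in practice, using Python's stdlib `unittest` as a neutral example. `OrderService` is a stand-in for your own code under test; the marked sections show which half is generated plumbing and which half stays human:

```python
import unittest
from unittest.mock import MagicMock

class OrderService:                       # stand-in for the code under test
    def __init__(self, payment_gateway):
        self.gateway = payment_gateway
    def checkout(self, amount):
        return self.gateway.charge(amount)

class TestOrderService(unittest.TestCase):
    # --- generated scaffolding: setup, mocks, teardown plumbing ---
    def setUp(self):
        self.gateway = MagicMock()
        self.gateway.charge.return_value = {"status": "ok"}
        self.service = OrderService(self.gateway)

    # --- human-written: the assertions that actually mean something ---
    def test_checkout_charges_gateway(self):
        result = self.service.checkout(42)
        self.gateway.charge.assert_called_once_with(42)
        self.assertEqual(result["status"], "ok")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestOrderService)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The split is the point: if the AI also writes the assertions, you're back in "tests pass because they were written to pass" territory.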

04 / 07

Code review assistance.

AI as first-pass reviewer, human as final reviewer.

A well-prompted AI catches the boring bugs — null handling, obvious performance issues, missing error paths, style drift — faster and more consistently than a tired senior engineer at 4pm. This doesn't replace human review; it protects it. Your senior people get cleaner PRs, which means their review time goes to the things that actually matter (architecture, naming, whether this is the right abstraction).

How to try it

Wire an AI reviewer into one repo as a required check for one week. Tune the prompt with what it misses and what it over-flags. By week two, your humans are reviewing better code.
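One way to make the weekly tuning concrete: keep the reviewer's checklist in code, so "what it misses and what it over-flags" turns into a one-line edit. A sketch — the checklist items below just restate the "boring bugs" from this section and are meant to be replaced with your own:

```python
# Checklist lives in one place so tuning it week over week is a code change.
REVIEW_CHECKLIST = [
    "null/None handling on every new code path",
    "obvious performance issues (N+1 queries, work inside hot loops)",
    "missing error paths and swallowed exceptions",
    "style drift from the surrounding file",
]

def build_review_prompt(diff: str) -> str:
    """First-pass reviewer prompt: flag concrete checklist issues only,
    leaving architecture and naming to the human reviewer."""
    items = "\n".join(f"- {c}" for c in REVIEW_CHECKLIST)
    return (
        "You are a first-pass code reviewer. Flag ONLY concrete issues "
        "from this checklist; a human reviews architecture and naming.\n"
        f"{items}\n\n--- DIFF ---\n{diff}"
    )

prompt = build_review_prompt("+ if user: process(user.orders)")
```

Scoping the prompt to the checklist is deliberate: an unconstrained reviewer drifts into architectural opinions, which is exactly the part you want to protect for humans.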

05 / 07

Internal documentation that stays current.

The problem with docs was never writing them. It was keeping them true.

Once I treated "complete context the AI can actually find" as the load-bearing input — not the docs themselves — the same material started doing work across onboarding, spec writing, code review, and test planning. If the AI can't retrieve it, it might as well not exist. That's the bar.

How to try it

Index your main repo + your specs folder into a vector store. Hook up a simple chat interface. Onboard your next new hire using it exclusively for the first week and see what they ask.
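The retrieval loop itself is simple enough to sketch. Real setups use embedding models and a vector store; the toy version below swaps in stdlib bag-of-words cosine similarity purely to show the shape — the doc paths are hypothetical:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Toy stand-in for an embedding: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Return the k doc paths most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(docs[d])),
                    reverse=True)
    return ranked[:k]

docs = {
    "specs/checkout.md": "checkout flow payment retries idempotency",
    "specs/onboarding.md": "new hire setup local environment tooling",
    "README.md": "repo layout build commands deploy pipeline",
}
top = retrieve("how do payment retries work in checkout", docs)
```

The "can the AI find it" bar from this section is exactly what `retrieve` tests: if a doc never ranks for the questions your team actually asks, it might as well not exist.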

06 / 07

PM/engineering alignment automation.

The costly part isn't the meetings. It's the context drift between them.

What fixed it for us was centralizing: one place to look, searchable, the same source of truth for PM and engineering. The unlock wasn't the integration surface area; it was the discipline of documenting our existing process first and then pointing the system at it. The first version skipped that step and became unusable within a quarter.

How to try it

Pick one system boundary — say, Notion specs → GitHub issues. Automate the one-way sync with AI. Measure how many clarification questions get asked in standup the following sprint.
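The one-way constraint is the important design choice, and it can be made explicit in code. A sketch of the planning step, with the spec/issue IDs as hypothetical placeholders — the point is that nothing ever writes back to the spec side:

```python
def plan_sync(spec_items: dict[str, str],
              issues: dict[str, str]) -> list[tuple[str, str]]:
    """One-way Notion -> GitHub plan: create missing issues, update
    drifted bodies. Specs are the source of truth; issues never win."""
    actions = []
    for key, body in spec_items.items():
        if key not in issues:
            actions.append(("create", key))
        elif issues[key] != body:
            actions.append(("update", key))
    return actions

specs = {"SPEC-1": "v2 body", "SPEC-2": "initial body"}
issues = {"SPEC-1": "v1 body"}          # SPEC-2 not yet in the tracker
actions = plan_sync(specs, issues)
```

Keeping the sync one-way is what makes "same source of truth" true in practice: a two-way sync reintroduces exactly the drift you were trying to kill.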

07 / 07

Onboarding new engineers.

Onboarding is a search problem before it's a training problem.

A new engineer's first month is mostly: 'where does this live, who owns this, why is it this way, what does this mean.' Every one of these is a query. A good internal RAG + a clear architecture doc + a new-hire prompt template can collapse a 4-week onboarding into 1.5 weeks of productive output. Your senior people stop answering the same questions, and your new hires stop feeling like they're in the way.

How to try it

Before your next hire starts, record a 30-minute architecture walkthrough and transcribe it. Index it alongside your codebase. See what the new engineer asks in week one versus what previous hires asked.
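The new-hire prompt template mentioned earlier is worth pinning down, because one detail carries most of the value: forcing the answer to cite its source, and to say so when the indexed context doesn't cover the question. A sketch — the source names and wording are assumptions:

```python
def onboarding_prompt(question: str,
                      chunks: list[tuple[str, str]]) -> str:
    """New-hire Q&A prompt over retrieved walkthrough/codebase chunks.
    Requiring a [source] citation keeps 'I don't know' honest."""
    ctx = "\n\n".join(f"[{src}]\n{text}" for src, text in chunks)
    return (
        "Answer the new engineer's question using ONLY the context below. "
        "Cite the [source] you used. If the context doesn't cover it, say "
        "so plainly instead of guessing.\n\n"
        f"{ctx}\n\nQuestion: {question}"
    )

prompt = onboarding_prompt(
    "where does the payments retry logic live?",
    [("walkthrough-transcript.txt",
      "Retries live in the payments module; ask the payments owner."),
     ("README.md", "Repo layout: payments/, auth/, ui/ ...")],
)
```

The refusal instruction is what protects your senior people: a confidently wrong answer to a new hire costs more than a question in Slack.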

And three where it doesn't.

01 / 03

Replacing senior engineering judgment.

AI is bad at deciding what not to build.

The single most valuable thing a senior engineer does is say 'we shouldn't build that' or 'we should build this differently.' This is taste plus context plus organizational read. AI is unreliable here — it's trained to be helpful, so it produces the thing you asked for even when the thing you asked for is wrong. Teams that replace architectural judgment with AI output end up with technically working systems that are pointed in the wrong direction, and the cost compounds over months. Protect this role. Use AI to make your senior engineers faster at the things they already decide to do; don't use it to decide.

How to avoid the trap

When an AI suggests an architectural direction, treat it as one opinion among your team's, not the answer. If your senior people consistently disagree with it, they're probably right.

02 / 03

Customer discovery and user research.

The goal of user research isn't to save time. It's to change your mind.

AI can summarize customer interviews beautifully. What it can't do is notice the thing in the interview the interviewer didn't ask about — the pause, the topic shift, the thing the user didn't say. Summarization compresses the research into what you were already looking for, which means the thing you weren't looking for gets lost. Teams that 'AI-automate' their discovery end up very efficient at finding what they already believed. Use AI to organize research; don't use it to replace the listening.

How to avoid the trap

If you're using AI to summarize interviews, also keep the raw transcripts and re-read one fully every two weeks. You'll notice things the summary removed.

03 / 03

High-stakes error handling and edge cases.

Confident-sounding code is the failure mode, not the feature.

AI writes code that looks right. For happy paths, this is great. For the places your system can genuinely hurt a user — payments, auth, data integrity, anything regulated — looking right is actively dangerous. An AI-generated error handler that catches the wrong exception class fails silently in production, and the bug doesn't surface until the incident. For these paths, slow down. Human-written, human-reviewed, explicitly tested. The 30% AI speedup isn't worth the 100x risk downstream.

How to avoid the trap

Mark a 'here be dragons' boundary in your codebase. Auth, payments, data migrations, anything irreversible. AI assistance in these modules is fine for tests and boilerplate. The logic stays human.
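The boundary can be enforced mechanically rather than by memory — for example, a CI step that flags changed files inside the protected paths and requires an extra human sign-off. A minimal sketch; the path patterns and the test-file convention are hypothetical and should match your repo:

```python
from fnmatch import fnmatch

# Hypothetical boundary list -- adapt to your repo. Per the rule above,
# AI assistance in these modules is fine for tests and boilerplate; the
# logic itself stays human-written and human-reviewed.
PROTECTED = ["auth/*", "payments/*", "migrations/*"]

def dragons(changed_files: list[str]) -> list[str]:
    """Return changed files inside the 'here be dragons' boundary,
    excluding test files, so CI can demand an extra human review."""
    return [f for f in changed_files
            if any(fnmatch(f, pat) for pat in PROTECTED)
            and not f.endswith("_test.py")]

flagged = dragons(["payments/refund.py",
                   "payments/refund_test.py",
                   "ui/button.py"])
```

A CODEOWNERS entry on the same paths gives you the human half of the rule for free.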

That's the short version. If your team is somewhere in here, the form at the bottom of the page is the easiest way to talk.

Ways to Work

Three shapes an engagement usually takes.

The right shape depends on where your team is and what you're carrying. Most engagements start as one of the three below and sometimes move between them as things change. Typical engagements run mid-five-figures to low-six-figures. Happy to get specific once I understand what you're working on.

Fractional CTO
for teams in the middle of something real.

Embedded technical leadership, part-time, ongoing. I'm in your standups, your PRs, your architecture calls, and your hiring loops. The engagement runs weeks to months depending on what you're shipping and how much momentum you need me carrying.

In practice this looks like: 2–3 sync meetings a week, active sprint participation, code reviews with real feedback, and a monthly sync with your leadership on roadmap health. I work in your tools — Linear, Jira, GitHub, Slack, Notion — and adapt to your cadence rather than importing mine.

Sound like you? That's probably the right shape. Send me what you're working on

Advisor
for teams that have the hands but want a second brain.

Lighter touch. A 1-hour strategy sync per month, async availability for the calls that matter, and a standing invitation to pull me in when something gets stuck. You make the decisions and execute; I think with you on the ones that are worth thinking twice about.

This works best when your senior people are strong but there's nobody at the CTO layer to pressure-test architectural direction, hiring calls, or the hard technical trade-offs. I'm a sounding board with skin in your success, not a consultant with a quarterly check-in.

Sound like you? Let's talk about what that would look like. Send me a note

The Diagnostic
for teams that want clarity before commitment.

A focused two-to-three week engagement. I come in, read your codebase, interview your PM and tech leads, map how your team actually moves from idea to shipped feature, and identify exactly where AI has real leverage in your SDLC — and where it's a distraction. You leave with a prioritized plan your team can run on its own.

This is often the first step when the shape of the right ongoing engagement isn't obvious yet. Some teams do this and execute the plan themselves. Others bring me back for ongoing support once the priorities are clear. Either is fine.

Sound like you? This is usually how we'd start. Send me a note

Case Study

How we cut our delivery cycle from 6 weeks to 2.5 at NextGen.

I co-founded NextGen Kitchens in late 2022 and held CTO + acting CPO for three-plus years. The company processes $1M/month in transactions across six locations, including food halls, airports, and malls. What follows is one specific piece of that story — how my team shipped a major product iteration in 2.5 weeks using the framework I now bring into other engagements. I'm including what didn't work because that part is actually more useful than the wins.

01

The starting point

Early 2026. We were about to ship a significant rework of our dashboard and kitchen display system — new data model, new UX, integration with three external systems. Our normal velocity for something this size was six weeks end to end. We had four weeks. Investors wanted it sooner. Standard startup math.

I mapped our SDLC honestly before we started. The hotspot wasn't coding — our engineers were fine. It was everything around the coding: requirements were living in three places, specs took a week of back-and-forth, and tests were mostly written after the fact under time pressure. The coding itself was maybe 40% of the calendar time.

02

What we changed

We pulled four things into the process from my own experiments: AI-assisted requirements synthesis (spec v1 generated from interview notes + existing docs), template-based spec generation (same prompt, same output format, reviewed by the PM in hours not days), scaffolding-first test generation (AI did the plumbing, engineers wrote the assertions), and a one-way sync between our Notion specs and GitHub issues so nothing drifted in translation.

None of this was revolutionary — each piece is a standard workflow these days. The point was running all four together on the same project, so the compounding was real. The meta-change was that my team trusted the framework because I'd already been running it privately on smaller features for a month.

03

The result

6 → 2.5 wks

cycle time

Same team size

no new hires

Fewer bugs

measured across launch week

We shipped in 2.5 weeks. Launch was cleaner than the previous iteration despite moving faster — partly because the specs were tighter going in, partly because the test scaffolding caught issues earlier. I wrote up the framework internally afterward and that document became the spine of what I now bring into Diagnostic engagements with other teams.

04

What didn't work

The first version of the PM dashboard was a mistake. I built it too fast. It was wired deep into our tools — GitHub, Slack, Notion, Google Docs — and it was useful for a while. Then priorities started shifting, which is what happens on any real team, and the system didn't flex with them. It became hard to use in exactly the environment it was built for.

What I got wrong going in: I thought the system itself was the hard part. It wasn't. The harder parts were getting the team to actually use it, centralizing where people looked for things, and earning trust from the senior engineers who — reasonably — didn't yet trust the AI. The tool was the easier half. The people part was the half that decided whether any of this worked.

What I'd do differently: document the existing process first, then build. We ended up having to go back and do that work anyway — review our processes, write them down, then transfer them into the system — and the rebuild is what started producing real results. I'd also accept sooner that some parts were over-engineered. Sometimes the manual way is better, and knowing which parts those are is its own skill. I'm still learning it.

The framework isn't magic and it doesn't map identically onto every team. The first thing I do in an engagement is figure out what version of this fits your stack, your team, and what you're trying to ship. Sometimes the answer is 'most of it,' sometimes it's 'one piece of this, hard.' The point is the honesty about what actually compresses the cycle versus what just looks good in a slide.

About

Where the curiosity came from.

I was born curious. I talked my dad into teaching me QBASIC when I was eleven, spent that year writing tiny games and passing floppy disks around the schoolyard, and drew blueprints for a robot that was going to help my mom with her cooking — she cooked a lot, and I was convinced the future would fix it. The robot never shipped. The habit did.

The CV version of the sixteen years since is: full-stack engineer out of school learning every layer I could get my hands on (frontend, backend, databases, networking, devops), consulting for smaller and mid-sized teams, a couple of startups of my own that taught me more than they paid, and co-founding NextGen Kitchens in late 2022 — CTO and acting CPO for three-plus years, zero to a million a month in transactions, production AI across recommendations, forecasting, RAG, and agent orchestration. Accurate. Also somewhat beside the point.

What I actually care about is the overlap between business and tech — how a product with the right instincts can quietly outcompete a company ten times its size. That's what played out at NextGen. Our customers had better-resourced options and chose us anyway, because we were more curious about what they actually needed and more willing to let the product reflect that. That lesson is what I bring into other teams now: curiosity about the user, honesty about what's working, and the patience to build the thing that fits instead of the thing that's in the slide.

Outside the work, I'm a dad and a husband. I live in Vancouver which is as beautiful as people say, and I met my wife backpacking in Southeast Asia — which explains a lot about how I travel and not much about how I work. I love what tech can do and I worry about what it's doing to how we relate to each other and to the natural world. I'm not sure how to hold both of those at once. I suspect the answer has something to do with building the next wave a little more carefully than the last one.

Most of my time right now is fractional and advisory. I'm also open to senior technical leadership roles for the right team — if that's you, write to me directly. If we end up working together, you'll find I care about your team and your users in a way that's probably slightly inefficient from a consulting standpoint. That's the point.

Community

A quiet project I care about.

Alongside the client work, I'm slowly building something closer to a community than a customer list. Product folks and engineering leads in Vancouver and remote — people working through the same AI shift on their own teams and wanting someone to think alongside. Not a course, not a paid cohort. Occasional conversations, an email list that's more digest than newsletter, and the odd in-person thing in Vancouver when it makes sense.

If that sounds useful, the email form at the bottom of this page is also how you get on the list. Low volume. No pitches. If we work together one day, that'd be great; if we don't, I still want to help the people figuring this out so they don't have to do it alone.

Contact

Send me what you're stuck on.

One line or a paragraph, either works. I read every message personally and reply within a couple of days.

No newsletter, no sales sequence.