Most founders think of software development as a single event: you hire someone, they build the thing, and it goes live. The reality is a seven-phase process where the most expensive mistakes happen in the phases most people never plan for.
Understanding each phase does not require a computer science degree. It requires knowing what decisions get made when, what those decisions cost to reverse later, and which phases are safe to compress versus which ones cause catastrophic rework if rushed. This article walks through all seven.
What are the major phases from idea through post-launch support?
A complete product lifecycle has seven phases that every product goes through, regardless of complexity.
Discovery is where the idea gets pressure-tested against reality. Design turns the validated idea into a visual blueprint. Development is where code gets written. QA testing checks that the code works as intended. Deployment moves the code from a development environment to a live server. Launch puts the product in front of users. Ongoing support keeps it running, secure, and improving.
The phases are sequential: skipping or compressing one pushes its problems into the next. A feature that was not tested properly in QA does not disappear. It becomes a bug report from an actual customer at the worst possible time.
| Phase | Typical duration | % of total project cost | What gets produced |
|---|---|---|---|
| Discovery | 1-2 weeks | 8-12% | Requirements document, user stories, feasibility assessment |
| Design | 1-3 weeks | 10-15% | Wireframes, visual mockups, design system |
| Development | 4-12 weeks | 50-60% | Working code, database, integrations |
| QA testing | 1-3 weeks | 10-15% | Bug reports, test coverage, signed-off features |
| Deployment | 3-5 days | 3-5% | Live production environment |
| Launch | 1 week | 2-4% | User onboarding, monitoring setup |
| Ongoing support | Ongoing | 15-20% per year | Updates, fixes, new features |
A McKinsey analysis of 5,400 software projects found that, on average, large IT projects ran 45% over budget and 7% over schedule. The most common culprits: requirements that were not locked in discovery, design changes made mid-development, and testing that was compressed to hit a launch date.
How does the discovery phase prevent wasted development effort?
Every hour spent in discovery saves roughly six in development. That ratio comes from IBM's widely cited systems engineering research: a requirements defect costs $1 to fix during discovery, $6-$10 during development, and $100 or more after launch.
Discovery answers four questions before a single design mockup or line of code exists. Who is the user and what problem are they solving? What features are needed for launch versus what can wait? What does success look like in measurable terms? What technical or regulatory constraints exist?
The output is a written requirements document, not a conversation summary or a slide deck: a document that specifies every feature, every user flow, and every edge case the system needs to handle. A feature that does not appear in the requirements document does not get built. This boundary is what prevents scope creep, which GoodFirms found was the cause of budget overruns in 60% of software projects.
For a typical MVP, discovery runs one to two weeks. A Western agency charges $5,000-$15,000 for this phase alone. A global engineering team with experienced product managers delivers the same output for $2,000-$4,000.
One concrete example: a healthcare startup comes to discovery with the idea of building a patient appointment platform. Without discovery, a developer starts building a booking system. With proper discovery, the team learns the client wants patients to also receive automated SMS reminders, that the product needs to connect to three existing clinic management systems, and that two of those systems have APIs documented only in Spanish. Those three facts change the build estimate by 40%. Better to find out in week one than week six.
What happens during the design phase before a single line of code is written?
The design phase produces two things: wireframes and mockups. Founders sometimes treat them as identical. They are not.
Wireframes are structural blueprints, black-and-white layouts that show where each element appears on each screen, how users move between screens, and what happens when they click a button. They have no color, no typography, no branding. The purpose is to get agreement on structure before any visual work begins. Changing a wireframe takes minutes. Changing a fully designed screen takes hours.
Mockups take the approved wireframes and add the full visual treatment: your brand colors, your fonts, the exact spacing between every element. This is what the final product will look like. A developer working from mockups has no visual decisions to make; their job is to build exactly what is on the screen.
Skipping directly to mockups without wireframes is a common mistake that causes expensive revision cycles. The InVision 2021 Design Maturity Report found that teams using structured wireframing before high-fidelity design reduced their design revision cycles by 38%.
The design phase also produces a design system, a library of reusable components (buttons, form fields, navigation elements) that maintain visual consistency across every screen. A product built without a design system looks inconsistent as it grows. Buttons appear in four slightly different sizes. Form fields have three different border styles. Users notice even if they cannot name what feels off.
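To make the idea concrete, here is a minimal sketch of what one design-system component can look like in a React and TypeScript codebase. The names (`Button`, `variant`, the CSS classes) are illustrative, not a prescribed standard; the point is that a single definition controls every button in the product.

```tsx
// Illustrative design-system component; names and classes are hypothetical.
import React from "react";

type ButtonVariant = "primary" | "secondary" | "danger";

interface ButtonProps {
  variant?: ButtonVariant;
  disabled?: boolean;
  onClick: () => void;
  children: React.ReactNode;
}

// One source of truth for button styling: change it here, it changes everywhere.
const VARIANT_CLASSES: Record<ButtonVariant, string> = {
  primary: "btn btn--primary",
  secondary: "btn btn--secondary",
  danger: "btn btn--danger",
};

export function Button({ variant = "primary", disabled = false, onClick, children }: ButtonProps) {
  return (
    <button className={VARIANT_CLASSES[variant]} disabled={disabled} onClick={onClick}>
      {children}
    </button>
  );
}
```

Every screen that needs a button imports this one component, which is why the four-sizes-of-button problem never appears.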
For a 10-screen MVP, design runs two to three weeks and costs $3,000-$6,000 from a global engineering team. A Western design agency charges $8,000-$20,000 for the same scope.
How does the development phase break down into sprints or milestones?
Development rarely runs as one unbroken block. Most teams organize it into sprints: focused, time-boxed periods of work, typically one to two weeks each, where the team builds a defined set of features and delivers something testable at the end.
A two-week sprint for a marketplace MVP might deliver: user registration and login, seller profile creation, and the product listing page. The client sees and approves those features before the next sprint begins. If something needs to change, it changes before another two weeks of work has built on top of it.
The alternative, a waterfall approach where development runs for three months before anything is shown to the client, produces a product that matches the original spec but not the client's actual intent. Founders change their minds. Markets shift. Competitors launch. Sprints surface those changes early when course-correcting is cheap.
| Development model | Visibility | Change cost | Risk level | Best for |
|---|---|---|---|---|
| Agile sprints (1-2 week cycles) | Weekly demos | Low, changes made before features compound | Low | Most startups and MVPs |
| Milestone-based (monthly) | Monthly reviews | Medium, changes affect the next milestone | Medium | Projects with stable requirements |
| Waterfall (build then review) | End of project | High, changes require rework across many features | High | Regulated industries with fixed specs |
A 2021 Standish Group Chaos Report found that agile projects succeeded at a rate of 42% versus 26% for waterfall projects. The gap is not about methodology preference; it is about feedback loops. Agile surfaces problems early. Waterfall surfaces them after the budget is spent.
For budget planning: a 10-week development phase for a mid-complexity product costs $18,000-$25,000 with a global engineering team. The same scope costs $60,000-$90,000 at a Western agency. The gap is not about corners cut; it is about what senior developers cost per month in San Francisco ($13,000-$17,000) versus what they cost on a global team with equivalent experience ($3,000-$5,000).
What does quality assurance testing catch that developers miss?
Developers test their own code, but they test it the way they wrote it, using the inputs they anticipated and the paths they designed. QA engineers test it the way users actually behave, which is rarely the way developers expect.
A QA engineer fills out a form with 500 characters in a field designed for 50. They submit it twice. They click the back button mid-checkout. They log in on two devices simultaneously and change their email address on one. These are the edge cases that crash systems in production and that developers, writing code, do not think to check.
Professional QA runs two types of testing in parallel. Automated testing uses scripts that check every feature, every time a change is made. A change to the payment flow automatically re-runs every test that touches payments. Manual testing covers what automation cannot: whether the product feels intuitive, whether button labels make sense, whether the visual design holds up on a 2016 Android phone.
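As an illustration, here is what two of those automated checks might look like in a Jest-style test suite. `validateField` and `submitOrder` are hypothetical stand-ins for a product's real functions.

```ts
// Hypothetical Jest-style tests for the edge cases described above.
import { validateField, submitOrder } from "./checkout"; // illustrative module

test("rejects input longer than the 50-character limit", () => {
  const oversized = "x".repeat(500); // the 500-characters-in-a-50-character-field case
  expect(validateField("name", oversized).ok).toBe(false);
});

test("submitting the same order twice does not create two orders", async () => {
  const order = { cartId: "cart-123" };
  const first = await submitOrder(order);
  const second = await submitOrder(order); // user double-clicks "Pay"
  expect(second.orderId).toBe(first.orderId); // idempotent: one order, not two
});
```

Once written, these checks run on every future change, which is what makes the payment-flow example above automatic rather than heroic.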
The IBM Systems Sciences Institute estimated that finding and fixing a bug after launch costs 100x more than finding it during development. QA is not overhead; it is insurance against the most expensive category of development cost.
For a 10-screen MVP, a thorough QA cycle runs one to two weeks and catches 80-120 bugs of varying severity. Most are minor. A small number are blockers that would have corrupted production data or locked users out of their accounts. Western agencies charge $5,000-$12,000 for QA on a project of this size. A global QA team delivers the same coverage for $2,000-$4,500.
How does deployment move code to production?
Deployment is the process of moving code from a development server, where the team works and tests, to a production server, where real users interact with it. The mechanics are invisible to end users when done well. When done poorly, users see error pages, lost data, or a site that is suddenly unavailable.
Professional deployment uses an automated pipeline: every code change is checked automatically before it can reach the production server. The check runs all tests. If any test fails, the change stops and the developer is notified. Nothing broken reaches users.
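The exact tooling varies (hosted services like GitHub Actions and GitLab CI are common), but the logic of the gate itself is simple. A minimal sketch as a Node script, with placeholder commands standing in for a real project's own:

```ts
// Minimal deploy-gate sketch. "npm test" and "npm run deploy" are placeholders.
import { execSync } from "node:child_process";

function run(command: string): void {
  execSync(command, { stdio: "inherit" }); // throws if the command exits non-zero
}

try {
  run("npm test"); // the full test suite, on every change
} catch {
  console.error("Tests failed: deployment blocked, developer notified.");
  process.exit(1); // nothing broken reaches users
}

run("npm run deploy"); // only reached when every test passes
```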
There is also a staging server, an exact copy of the production environment where the team does final verification before going live. The sequence is: development, automated checks, staging, final human review, production. Skipping staging is where most deployment disasters originate.
| Deployment practice | Risk without it | Time to fix when it goes wrong |
|---|---|---|
| Automated test pipeline | A breaking change reaches real users | Hours to days depending on severity |
| Staging environment | Bugs appear only in production, not caught in testing | Same as above |
| Zero-downtime deployment | Users see "site under maintenance" during every update | N/A, it just looks unprofessional every time |
| Rollback capability | A bad release cannot be undone quickly | Potentially days if the database was also changed |
| Monitoring and alerts | Team learns about outages from customer complaints | Varies, but reputational cost is immediate |
Zero-downtime deployment means the site stays live while an update is being applied. Users never see a maintenance page. The new version replaces the old one in the background. This is not a luxury feature; it is a baseline expectation for any product launched after 2018.
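One common mechanism behind zero-downtime deploys is a health check: during a rolling deploy, the load balancer routes user traffic to the new version only after it reports ready. A minimal sketch, assuming an Express server and a hypothetical `checkDatabase` readiness probe:

```ts
// Health-check sketch, assuming Express; checkDatabase is a hypothetical probe.
import express from "express";

const app = express();

app.get("/health", async (_req, res) => {
  if (await checkDatabase()) {
    res.status(200).send("ok"); // ready: the load balancer can send traffic here
  } else {
    res.status(503).send("starting"); // not ready: traffic stays on the old version
  }
});

async function checkDatabase(): Promise<boolean> {
  // A real check would ping the database connection pool.
  return true;
}

app.listen(3000);
```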
Deployment is where under-built products often show their first structural problems. If the codebase has no test coverage, automated checks cannot run. If the application was not designed to run on multiple servers, it cannot scale under traffic. These problems are not deployment problems; they are development problems that deployment makes visible.
What ongoing costs appear after launch that founders forget to budget for?
This is the section most founders skip and then regret. The product going live is not the end of spending. It is the beginning of a different category of spending.
Hosting costs money every month. Third-party services like payment processors, mapping, SMS, and authentication charge ongoing fees. Security vulnerabilities get discovered in the software libraries your product depends on, and those vulnerabilities need to be patched within days of disclosure. Browser updates, operating system updates, and mobile OS updates sometimes break features that worked perfectly the day before. Users report bugs. Regulations change.
The industry standard estimate is that ongoing maintenance costs 15-20% of the original development cost per year. A product that cost $50,000 to build costs $7,500-$10,000 per year to maintain at a baseline level, before any new features are added.
| Post-launch cost category | Monthly range | What it covers |
|---|---|---|
| Hosting and infrastructure | $50-$800/month | Servers, storage, content delivery, backups |
| Third-party service fees | $100-$500/month | Payments, SMS, maps, authentication |
| Security patches and updates | $300-$800/month | Dependency updates, vulnerability fixes |
| Bug fixes | $500-$1,500/month | Issues found by users after launch |
| New features | $2,000-$6,000/month | Improvements based on user feedback |
Hosting costs vary widely based on architecture decisions made during development. A product built to run on a single always-on server costs roughly $0.50 per user per month because the server runs whether anyone is using the product or not. A product built to use computing power only when users are active costs roughly $0.05 per user per month. At 50,000 users, that is the difference between a $25,000 monthly server bill and a $2,500 one. Architecture decisions from month one compound for years.
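The arithmetic is worth a quick sketch; the per-user figures are the rough estimates above, not universal constants.

```ts
// Rough hosting-cost comparison using the per-user estimates above.
const users = 50_000;
const alwaysOnPerUser = 0.5;  // single always-on server, rough estimate
const onDemandPerUser = 0.05; // pay-per-use compute, rough estimate

console.log(users * alwaysOnPerUser); // 25000 -> ~$25,000/month
console.log(users * onDemandPerUser); // 2500  -> ~$2,500/month
```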
Founders who build with a Western agency at $120,000 often have no budget left for maintenance. Founders who build with a global engineering team at $35,000-$45,000 retain enough runway to fund two years of ongoing development. The second group ships faster, iterates on user feedback, and reaches product-market fit before the first group finishes onboarding their agency.
How do I decide when to stop iterating and start scaling?
This question does not have a single answer, but it has a clear set of signals. Continuing to iterate when the signals point toward scaling wastes months. Scaling before the signals appear wastes much more.
Product-market fit is typically evidenced by a combination of retention, organic growth, and user desperation. If 40% or more of your users say they would be "very disappointed" if your product disappeared (the threshold Sean Ellis established in his 2010 research), you have product-market fit. If users are sharing the product without being asked, you have it. If your churn rate is dropping month over month, you likely have it.
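Those signals are measurable once basic activity events are logged. Here is a minimal sketch of the 30-day retention calculation; the data shapes are illustrative.

```ts
// Sketch of a 30-day retention calculation; data shapes are illustrative.
interface User {
  signedUpAt: Date;
  activeDates: Date[]; // days on which the user opened the product
}

const DAY_MS = 24 * 60 * 60 * 1000;

// A user counts as retained if they were active 30 or more days after signup.
function retention30(users: User[]): number {
  if (users.length === 0) return 0;
  const retained = users.filter((u) =>
    u.activeDates.some((d) => d.getTime() - u.signedUpAt.getTime() >= 30 * DAY_MS)
  );
  return retained.length / users.length; // 0.4 means 40% 30-day retention
}
```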
The scale-or-iterate decision also depends on what is breaking. If the product is losing users because features are missing, iterating makes sense. If the product is losing users because the servers slow down when 500 people log in simultaneously, that is a scaling problem, not a product problem.
| Signal | Meaning | Action |
|---|---|---|
| Retention below 20% at 30 days | Users try it and leave | Iterate, the core value proposition needs work |
| Retention above 40% at 30 days | Users return and make it a habit | Prepare to scale |
| Organic referral rate above 15% | Users recommend it without prompting | Accelerate acquisition |
| Server response times increasing | Architecture is hitting its limits | Scale infrastructure |
| Support tickets tripling month over month | Product is being used but has gaps | Triage between bugs (fix now) and features (roadmap) |
A Paul Graham essay from 2013 put the threshold clearly: when growth starts compounding, scaling in any form (team, infrastructure, marketing) amplifies whatever is already working. Scaling a product that does not retain users amplifies the churn problem, not the growth.
For most startups, the window between early traction and needing to scale is three to six months. The founders who use that window to tighten retention rather than add features are the ones who scale efficiently.
What documentation should exist at the end of each phase?
Documentation is the phase deliverable that most agencies promise and few actually produce. Without it, every question about why the product works the way it does requires a call with the original developer, who may no longer be available.
By the end of discovery: a requirements document that lists every feature, every user type, every edge case, and every assumption. This document is the contract between the team and the client. Any feature not in the document is out of scope.
By the end of design: a complete set of approved mockups and a design system library. The mockups cover every screen, every state (empty, loading, error, success), and every breakpoint for mobile and desktop. The design system documents every reusable component.
By the end of development: a technical guide that explains how the codebase is organized, how to run the project locally, how the database is structured, and how to add a new feature without breaking existing ones. This document is what allows a new developer to contribute without a two-week onboarding call.
By the end of QA: a test report listing every test run, every bug found, the severity of each bug, and the resolution status. This report is evidence that the product was properly tested before launch, important if a client ever disputes whether a feature worked at handoff.
By the end of deployment: a runbook for common operational tasks, covering how to restart the server, how to roll back to a previous version, how to add capacity when traffic spikes, and who to call when something breaks at 2 AM.
A Stripe Engineering blog post from 2020 estimated that documentation reduces new developer onboarding time from three weeks to five days. For a startup that is growing its team or bringing development in-house, that gap is meaningful.
Timespade delivers all five documentation artifacts as part of every engagement. Every project hands off with requirements, design files, technical documentation, a QA report, and an operational runbook. The code belongs to the client from day one, and so does everything needed to understand it.
The full lifecycle from discovery to documented deployment takes 8-16 weeks depending on complexity. A Western agency charges $80,000-$150,000 for this scope. A global engineering team like Timespade delivers the same process for $25,000-$50,000, with the same phases, the same deliverables, and the same quality bar.
If you want to walk through the scope of your specific product and get a phase-by-phase estimate, the first conversation is free. Book a free discovery call.
