Most founders who have been burned by a bad agency say the same thing in hindsight: the warning signs were there in week one. A vague proposal. A contract that buried IP rights in paragraph 14. A sales contact who disappeared after the invoice was paid. None of it was hidden. They just did not know what to look for.
This guide is the list they wish they had. It covers how to read a portfolio, what a proposal reveals before the project starts, which pricing models protect you and which expose you to runaway costs, how to assess technical quality without writing a line of code yourself, and what to do when an engagement goes wrong mid-stream.
What should I look for in an agency's past project portfolio?
A portfolio is evidence, not a gallery. The question is not whether the work looks good. The question is whether the problems they solved in those projects resemble the problem you are handing them now.
Start with domain complexity. An agency that has built seven marketing websites and one e-commerce store is not equipped to build a marketplace where buyers, sellers, and administrators all interact in real time. The underlying complexity is completely different. Look for at least two or three projects where the core challenge matches yours: user roles, payment flows, live data, third-party integrations, or whatever your product requires.
Then look at outcomes, not screenshots. Any agency can show you a polished mockup. Ask instead: is the product still live? Who runs it today? Did it hit its launch target? A Clutch survey from 2023 found that 40% of agency-built products are either abandoned or significantly rebuilt within 18 months of launch. That number drops sharply when the agency has a track record of projects still operating two or more years after handover.
Ask specifically whether the agency has built any projects at the scale you are targeting. A team that has shipped to 500 users may not have solved the architecture problems that appear at 50,000 users. Not because they are bad, but because those problems require a different set of decisions at the start, and teams without that experience do not know to make them.
Finally, notice what is absent. Agencies doing serious technical work usually have a GitHub presence, documented case studies with specific metrics, and references you can actually contact. An agency whose entire portfolio is password-protected or undocumented is hiding something, usually that the client owns the outcome story and will not authorize public attribution.
How does the proposal process reveal an agency's working style?
You can learn more from how an agency writes a proposal than from anything they tell you on a discovery call.
A well-run agency sends a proposal that breaks the project into phases, names the deliverables in each phase, specifies who on their side is responsible for what, and lists the assumptions they made when scoping. This takes work to write. Agencies that do it anyway are signaling that they have a real process, one they follow on every project, not just on the ones where the client is demanding.
An agency that sends a one-page proposal with a lump-sum price and a vague description of "full development and delivery" is telling you something important. They either have not thought through your project carefully, or they are keeping the scope deliberately loose so they can charge for changes later.
Watch for two specific things. Does the proposal include a change order process? Every project will need changes. How those changes get scoped and priced is the mechanism that determines whether your $30,000 project becomes a $55,000 project six weeks in. A good agency names this process explicitly. A problematic one leaves it out, because the ambiguity benefits them, not you.
Does the proposal describe a communication cadence? Weekly updates, which tool they use for project tracking, who your point of contact is, how bugs get reported and resolved. This seems like housekeeping but it predicts the working relationship almost perfectly. According to a 2022 PMI report, poor communication is cited as the primary reason for project failure in 57% of cases. The proposal is the first sample of how this agency communicates. Read it that way.
What pricing models do agencies use and which protects the client?
Four pricing models are in common use across the software development industry; three of them dominate new product builds. They are not equivalent.
Time-and-materials billing charges you for every hour worked, at a fixed hourly rate. It is the most common model at large agencies and the most dangerous for clients who do not have deep technical oversight. The agency's financial incentive under this model is more hours, not fewer. A McKinsey analysis of software project overruns found that time-and-materials projects run over budget 50% more often than fixed-scope projects. You absorb 100% of the cost of any scope creep, re-work, or slow team velocity.
Fixed-price contracts set a total cost for a defined scope. The risk shifts to the agency. If the project takes longer than they estimated, that is their cost, not yours. The catch is that the scope must be locked before the project begins. Any new feature request triggers a change order with its own price. Used correctly, this model gives you cost certainty and creates an incentive for the agency to estimate accurately.
Milestone-based contracts are a hybrid. You pay a set amount when each phase of the project is delivered and accepted. This model protects you in two ways: you never pay for work that has not been completed, and you have a natural exit point at each milestone if the relationship is not working. For most product builds, a milestone structure is the most client-friendly arrangement available.
| Pricing model | Who bears cost risk | Best for | Watch out for |
|---|---|---|---|
| Time and materials | Client | Ongoing maintenance, research sprints | Budget overruns, slow velocity |
| Fixed price | Agency | Well-defined MVPs, known scope | Rigid scope, heavy change order fees |
| Milestone-based | Shared | Phased product builds, first engagements | Poorly defined acceptance criteria |
| Retainer | Client | Long-term product iteration | Paying for capacity you do not use |
Western agencies typically bill time-and-materials at $150–$250 per hour. An AI-native agency working on a fixed or milestone basis for a comparable MVP comes in at $8,000–$12,000 total, versus $40,000–$80,000 billed by the hour at a US agency over the same build. The pricing model matters, but so does the underlying cost structure.
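The gap is easy to sanity-check. A quick sketch that backs out the billable hours implied by the figures above; the rates and totals come from this section, and everything else is arithmetic:

```python
# Back out the billable hours implied by the figures above.
# Rates and totals come from the paragraph; nothing else is assumed.

rate_low, rate_high = 150, 250            # $/hour at a Western agency
total_low, total_high = 40_000, 80_000    # typical T&M total for an MVP

hours_low = total_low / rate_low          # ~267 hours
hours_high = total_high / rate_high       # 320 hours
print(f"Implied effort: {hours_low:.0f}-{hours_high:.0f} hours")

# Under T&M, every one of those hours is billed to the client.
# A fixed or milestone quote of $8,000-$12,000 caps exposure no matter
# how many hours the build actually takes.
```

The point of the exercise is not the exact hour count. It is that under hourly billing, every one of those hours is the client's risk; under a fixed or milestone quote, it is the agency's.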
How do I evaluate an agency's technical competence without being technical?
You do not need to understand code to evaluate the quality of a technical team. You need to ask the right questions and listen for the answers that a confident, competent team gives without hesitation.
Ask: what technology stack will you use for this project, and why? A capable team will answer in plain English. They will name the tools they plan to use and explain the business reason: pages that load in under two seconds, or a product that keeps running when one server goes down. A team that responds with jargon and acronyms without translation is either not used to working with non-technical clients or is obscuring the fact that they plan to use whatever technology they already know, regardless of whether it fits your product.
Ask: how do you handle testing? The right answer involves automated checks that run before any update goes live and a separate quality review stage. If the answer is "we test before we hand off to you," find another agency. That means your users are the last line of testing, which means your users find the bugs.
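To make "automated checks" concrete for a non-technical reader, here is a minimal, illustrative example. The function and its checks are invented for illustration; the point is that checks like these run automatically on every change, before deployment, so that users are never the ones who find the bug:

```python
# Illustrative example of the kind of automated check a competent
# agency runs before any update goes live. The function is hypothetical.

def apply_discount(price: float, percent: float) -> float:
    """Return price after a percentage discount, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0.0)

# Checks that run automatically on every change, before deployment:
assert apply_discount(100, 10) == 90.0   # normal discount
assert apply_discount(50, 100) == 0.0    # full discount floors at zero
try:
    apply_discount(100, 150)             # invalid input must be rejected
except ValueError:
    pass
else:
    raise AssertionError("invalid discount should be rejected")
```

If any of these checks fails, the update never reaches users. That is the safeguard you are asking the agency to describe.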
Ask: who specifically will build this project? Many agencies win clients through senior people on the sales call, then hand the build to junior staff elsewhere. Ask for the CVs or LinkedIn profiles of the engineers who will actually work on your project. Ask whether any part of the work will be subcontracted. A good agency answers without hesitation. A problematic one redirects to testimonials and case studies.
Stack Overflow's 2023 developer survey found that only 38% of senior developers worldwide are based in the US or Western Europe. The other 62% are based elsewhere in the world. Technical competence is not geography. What matters is whether the team has shipped comparable products before and whether their process includes the safeguards that protect your investment.
What red flags appear during the first two weeks of an engagement?
The first two weeks of any software project are the highest signal period of the entire engagement. What happens in those weeks is almost always what will happen throughout.
The most reliable early warning sign is a communication vacuum. You signed the contract, you paid the deposit, and now the weekly update is late, incomplete, or generic. A team that is on top of their work sends short, specific updates: what got built, what is next, what is blocked. A team that is struggling, distracted, or understaffed sends nothing, or sends long process descriptions that contain no actual progress.
A second signal is a delayed or vague project plan. Within the first week, you should have a documented plan showing every phase, every deliverable, and every expected completion date. If the agency needs more than five business days to produce this after kickoff, they either have not started or do not have a real process. According to Project Management Institute data from 2023, projects that start without a documented plan are 2.5x more likely to miss their final deadline.
A third flag is design or scope drift without a change order. If the agency starts building things that were not in the agreed scope without raising a change order first, even things that seem helpful, that is a sign that the scope management process is informal. Informal scope management is how $30,000 projects become $60,000 projects.
Finally, watch for unexplained personnel changes. If the engineer you met during scoping has been replaced by someone you have never spoken to, ask why. Some turnover is normal; sudden, unexplained changes in the team assigned to your project are not.
How do contracts handle IP ownership and source code access?
This is the section most founders skip during contract review. It is the one that matters most after the project ends.
The default legal position in most jurisdictions is that the contractor (here, the agency) owns the intellectual property they create, unless the contract explicitly assigns that ownership to the client. That means if your contract does not say you own the code, you probably do not. You have a license to use the software, but the agency retains ownership.
The practical consequence of this is not theoretical. Founders who discover this problem typically discover it when they try to switch agencies. The new agency opens the codebase and finds a clause that restricts modification without the original agency's written consent. Or the original agency demands a licensing fee to transfer the repository. These situations happen more often than the industry admits.
Every software development contract should contain four things on intellectual property. The client owns all code, design files, and documentation produced during the engagement. Ownership transfers fully upon final payment. The agency retains no license to reuse the work in other client projects. The client has access to the code repository at all times during the project, not just at handover.
Repository access during the project is both a legal protection and a practical quality signal. An agency that gives you live access to the codebase is an agency that is not worried about you seeing how the work is progressing. An agency that refuses to grant access until the project is complete has something to hide or wants to retain leverage over handover.
| Contract clause | What it should say | Red flag version |
|---|---|---|
| IP ownership | All work product is assigned to the client upon creation | Client receives a perpetual license to use the software |
| Source code access | Client has read access to the repository throughout the project | Code delivered upon final payment |
| Third-party components | All open-source licenses are documented and compatible with commercial use | No mention of third-party components |
| Non-compete / non-solicitation | Reasonable 12-month restriction, narrowly defined | Overly broad restrictions that limit future hiring |
| Data ownership | All client data remains the property of the client | Agency may use anonymized data for product improvement |
What questions should I ask the agency's previous clients?
Reference calls are the most underused tool in the hiring process. Most founders ask one or two questions about quality and hang up. The questions that actually predict your experience are different.
Ask: were there any moments where you felt the agency was not being straightforward with you? This is the question that breaks through the polished reference call. Satisfied clients who had one difficult moment will often share it if asked directly. The answer tells you how the agency handles adversity, which is what you actually need to know.
Ask: how did the agency respond when something went wrong? Every project has at least one failure: a bug, a missed deadline, a misunderstood requirement. How the agency handled it tells you more about their character than anything that went right. A team that owns mistakes, fixes them without billing extra, and communicates clearly through the problem is a team worth hiring.
Ask: would you give them the same project again? Not "would you recommend them." The referral question is easy to say yes to out of politeness. "Would you give them the same project again" is a harder question that forces a real evaluation.
Ask: what would you change about how you managed the relationship? This inverts the blame lens. It tells you what the client wishes they had done differently, which often reveals what the agency should have caught or prevented. If three separate references say they wish they had locked the scope tighter, that is the agency's failure to manage the scoping process, not three founders making the same mistake independently.
A 2023 Gartner survey found that 71% of buyers consider peer references the most trusted source of information when evaluating technology vendors. Talk to at least three references. One good experience is anecdotal. Three is a pattern.
How does an agency's use of AI tooling affect my project cost?
In early 2024, AI-assisted development was new enough that most agencies had not integrated it into their standard workflow. A handful of forward-looking teams were using it consistently; the majority were not. This created a meaningful gap in speed and cost that has only widened since.
An agency that has built AI tooling into its development process ships faster because the repetitive portion of the work (the standard infrastructure that every app needs, the common patterns that appear in every codebase) gets handled in hours instead of days. GitHub's research from 2023 found that developers using AI coding tools completed tasks 55% faster. On a $30,000 project, that gap is worth $15,000–$18,000 in hours that do not need to be billed.
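That savings range is straightforward to reproduce. A sketch, with the key assumption labeled: it treats the time reduction as applying across the project's billed hours, bracketing GitHub's 55% figure with a 50–60% band. That bracketing is our assumption, not part of the GitHub study:

```python
# Reproduce the $15,000-$18,000 savings range cited above.
# Assumption (ours, not GitHub's): the time reduction applies across
# the project's billed hours, bracketed at 50-60% around the 55% figure.

project_cost = 30_000  # T&M cost of the build without AI tooling

for time_reduction in (0.50, 0.60):
    unbilled = project_cost * time_reduction
    print(f"{time_reduction:.0%} less time -> ${unbilled:,.0f} not billed")
```

In practice the speedup applies unevenly, hitting the repetitive work hardest, but the bracket shows where the cited range comes from.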
The mechanism is straightforward. Every software project has a portion of code that is identical or near-identical to code in a thousand other projects: user authentication, form validation, database connections, standard navigation. In a traditional workflow, a developer writes that code from scratch or adapts it manually from a previous project. In an AI-assisted workflow, that draft exists in minutes. The developer's time goes to the logic that is unique to your product.
Asking about AI tooling is not about whether an agency is current. It is about whether their cost structure reflects a 2024 workflow or one that has not changed in a decade.
| Workflow type | Typical MVP cost | Typical timeline | AI in process? |
|---|---|---|---|
| Traditional Western agency | $40,000–$80,000 | 3–6 months | Rarely |
| Traditional offshore agency | $15,000–$30,000 | 2–4 months | Sometimes |
| AI-native agency (e.g., Timespade) | $8,000–$12,000 | 3–5 weeks | Yes, throughout |
When evaluating an agency in January 2024, ask not "do you use AI" but "show me where it fits in your sprint process." Ask them to describe the last project where AI tooling changed how long something took. A team that has genuinely integrated AI into their process can answer that question with a specific example in under two minutes.
When should I walk away from an agency mid-project?
Exiting a project mid-stream is expensive. You write off the sunk cost, you face a handover gap, and you need to find a replacement team with incomplete context. None of that means you should stay when you should leave. Staying with the wrong agency is almost always more expensive than the disruption of switching.
The clearest signal that walking away is the right call is a pattern of unresolved issues. Not a single bad week. Not one missed update. A pattern: three or more consecutive weeks where the same problem recurs despite your raising it, or a fundamental disagreement about scope or quality that the agency refuses to address. A 2022 Standish Group report found that 31% of software projects are cancelled before completion. Most of those cancellations happen six to twelve weeks later than the data suggested they should have.
A second signal is a fundamental skills gap that only became apparent after the project started. Sometimes an agency presents well but lacks the specific experience your product requires. If you are three weeks in and the agency cannot show working code for the core feature your product depends on, that is not a planning problem. That is a capability problem.
Before exiting, make sure your contract gives you the right to do so cleanly. A milestone-based contract lets you stop at the end of any phase and take what has been delivered. A time-and-materials contract with no deliverable milestones may give the agency a claim to payment for work in progress that you cannot use. This is another reason why contract structure matters before a single line of code is written.
When you do decide to exit, do it in writing, request a full repository export and all associated credentials within 24 hours of notice, and document the state of deliverables at that moment. Agencies that are in the wrong often delay handover in hopes that the awkwardness prompts the client to renegotiate. The 24-hour request makes that tactic harder to execute.
Timespade has been brought in to rescue projects at this stage more than once. The pattern is consistent: a codebase that is partially built, documentation that is missing or inconsistent, and a client who waited two to three weeks longer than they should have before making the call. Getting a second opinion on a stalled project costs nothing. The discovery call is free, and if something is wrong, earlier is almost always better.
