Thirty products or three hundred: the catalog size debate comes up in almost every recommendation engine conversation. Founders with small catalogs assume the technology only works at Amazon scale. That assumption is wrong, but catalog size does change which approach works, and in some cases it changes whether you need a recommendation engine at all.
How does catalog size affect recommendation quality?
A recommendation engine's job is to predict what a user will want next. It does this by finding patterns: users who bought X also bought Y, users who click A usually scroll past B. The more products you have, the more patterns there are to find. That part is obvious.
What is less obvious is why a small catalog creates a different kind of problem. With 50 products, the engine can only recommend from 50 options. If a user has already seen most of them, the recommendations stop being useful fast. This is called catalog exhaustion, and it is the real bottleneck for small-catalog recommendation systems, not the math.
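One way to make catalog exhaustion concrete is to track what fraction of the catalog a user has already seen and fall back to curated picks once that fraction gets high. A minimal sketch; the 0.8 threshold and product IDs are illustrative assumptions, not a recommendation:

```python
def exhaustion_ratio(seen_product_ids, catalog_size):
    """Fraction of the catalog this user has already seen."""
    return len(seen_product_ids) / catalog_size

def should_recommend(seen_product_ids, catalog_size, threshold=0.8):
    # Above the (illustrative) threshold, the engine has almost nothing
    # new to show; a curated section is the better fallback.
    return exhaustion_ratio(seen_product_ids, catalog_size) < threshold

print(should_recommend({"p1", "p2", "p3"}, 50))  # plenty left to discover
```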
A 2023 study published in the ACM Recommender Systems proceedings found that recommendation diversity dropped by 34% when catalog size fell below 200 items. Below 50 items, the study found engines defaulted to popularity-based rankings 60% of the time because there was not enough variety to find meaningful individual preferences.
That said, catalog exhaustion only becomes a real problem when your users are repeat visitors with broad browsing histories. A founder selling 40 premium products to first-time buyers has a very different situation than a founder selling 40 products to subscribers who log in weekly. The former barely needs personalization. The latter has a real use case.
What recommendation approach works best with fewer products?
There are three broad approaches, and each one handles a small catalog differently.
Collaborative filtering looks at what groups of similar users have done and predicts what a new user will probably do. It requires enough users, not enough products. Spotify recommends from 100 million songs, but its underlying logic is "people who liked these 10 songs also liked this one." That logic works just as well with 40 products if you have enough users. Netflix's research team has published extensively on collaborative filtering at scale, but the same technique works for a 60-item subscription box catalog, with roughly 500 monthly active users as a practical floor.
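The core idea fits in a few lines. This is a minimal sketch, not a production system; the user names and product IDs are hypothetical, and similarity here is simple set overlap rather than the matrix factorization a real engine would use:

```python
# Hypothetical purchase history: user -> set of product IDs.
purchases = {
    "ana":  {"hot_sauce", "vinaigrette", "olive_oil"},
    "ben":  {"hot_sauce", "vinaigrette", "sea_salt"},
    "cara": {"olive_oil", "sea_salt"},
}

def similarity(a, b):
    """Jaccard overlap between two users' purchase sets."""
    union = purchases[a] | purchases[b]
    return len(purchases[a] & purchases[b]) / len(union) if union else 0.0

def recommend(user, k=3):
    """Score products the user hasn't bought by the similarity of the
    users who did buy them, then return the top k."""
    scores = {}
    for other in purchases:
        if other == user:
            continue
        w = similarity(user, other)
        for item in purchases[other] - purchases[user]:
            scores[item] = scores.get(item, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ana"))  # both similar users bought sea_salt
```

Notice that nothing in this logic depends on catalog size; it depends entirely on having enough users with overlapping histories.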
Content-based filtering looks at the product itself: its attributes, category, price range, description. It recommends products that are similar to what the user already liked. A small catalog is not a barrier here. A founder with 30 artisan food products can recommend the smoked paprika hot sauce to someone who bought the chipotle vinaigrette because both are smoky, condiment-category, and under $15. The engine is reasoning about the products, not the population.
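The hot-sauce example can be sketched as plain attribute overlap. The product names and hand-tagged attributes below are hypothetical:

```python
# Hypothetical catalog with hand-tagged attributes.
products = {
    "chipotle_vinaigrette":     {"smoky", "condiment", "under_15"},
    "smoked_paprika_hot_sauce": {"smoky", "condiment", "under_15"},
    "lavender_honey":           {"sweet", "spread", "under_15"},
}

def similar_to(product_id, n=1):
    """Rank other products by how many attributes they share."""
    base = products[product_id]
    scored = sorted(
        ((len(base & attrs), pid)
         for pid, attrs in products.items() if pid != product_id),
        reverse=True,
    )
    return [pid for _, pid in scored[:n]]

print(similar_to("chipotle_vinaigrette"))  # smoky, condiment, under $15
```

Because the engine reasons about products rather than people, this works on day one with zero purchase history.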
Hybrid systems combine both. Most production recommendation engines at companies like Etsy and Pinterest use hybrid approaches because pure collaborative filtering fails for new products (no purchase history yet) and pure content-based filtering fails for novel recommendations (it just shows you more of what you already have). For a small catalog, a lightweight hybrid tends to outperform either approach alone, and the infrastructure cost is lower than most founders expect.
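A lightweight hybrid can be as simple as a weighted blend of the two score sets, which also gives you the cold-start fallback for free: a new product with no purchase history still gets a content score. The scores and the 0.6 weight below are made-up illustrations:

```python
def hybrid_scores(collab, content, alpha=0.6):
    """Blend the two signals; alpha weights collaborative filtering.
    Items with no purchase history still get a content score."""
    items = set(collab) | set(content)
    return {i: alpha * collab.get(i, 0.0) + (1 - alpha) * content.get(i, 0.0)
            for i in items}

# Made-up scores for illustration.
collab = {"sea_salt": 0.75}                        # from purchase patterns
content = {"sea_salt": 0.20, "truffle_oil": 0.90}  # from attribute overlap
ranked = sorted(hybrid_scores(collab, content).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)
```

Tuning alpha is one of the business-specific decisions that stays with a human even when the plumbing is generated.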
A Baymard Institute study from 2022 found that relevant product recommendations increase average order value by 10–30% across e-commerce contexts. The catalog size in that study ranged from 25 to 50,000 SKUs. The lift was real across all catalog sizes, though it peaked between 500 and 5,000 items.
When does a small catalog make a recommendation engine pointless?
The honest answer: when users can see your whole catalog in one scroll.
If a founder has 15 products and a homepage that shows all of them, a recommendation engine adds no discovery value. The user has already seen everything. A recommendation that says "you might also like this" when the user has already seen the product twice just adds friction. In that case, a curated "frequently bought together" section built by hand will outperform any algorithm, costs nothing to run, and takes an afternoon to set up.
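That hand-built alternative really can be a lookup table. A sketch with hypothetical SKUs:

```python
# Hand-curated pairings, maintained by the founder (hypothetical SKUs).
# No model, no training data, no monthly fee.
BOUGHT_TOGETHER = {
    "chipotle_vinaigrette": ["smoked_paprika_hot_sauce", "sea_salt"],
    "lavender_honey": ["chamomile_tea"],
}

def frequently_bought_together(sku):
    """Return the curated list, or nothing for an unmapped product."""
    return BOUGHT_TOGETHER.get(sku, [])

print(frequently_bought_together("lavender_honey"))
```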
The threshold tends to be around 30 to 40 products. Below that, manual curation is almost always better. Above it, the math starts to pay off, especially if you have multiple product categories that do not obviously connect to each other.
A few other conditions make a recommendation engine a waste of time at any catalog size. If your users make one purchase and rarely return, there is no history to learn from. A recommendation engine needs repeat signals. One transaction tells the engine almost nothing useful about what to show next.
If your products are not related to each other at all, the engine cannot find meaningful patterns. A consulting firm selling three-hour strategy sessions, a half-day workshop, and a monthly retainer does not have a recommendation problem. It has a sales conversation problem.
If your average session time is under 90 seconds, personalization will not have time to influence the visit. The user is already gone before the recommendations load.
Is it expensive to run recommendations on a limited catalog?
No. This is one of the more durable misconceptions in the space.
A recommendation engine for a small catalog costs a fraction of what founders assume because the computational work scales with the number of products and active users, not with the ambition of the feature. A catalog of 80 products with 2,000 monthly active users needs almost no compute to run a recommendation layer. Amazon spends billions on recommendations because it has 350 million products and hundreds of millions of shoppers. That budget does not translate down to a startup context.
In practice, there are two cost components. There is the cost to build or integrate the system, and there is the ongoing cost to run it.
For integration, most founders do not need a custom-built engine. Existing tools like Recombee, Barilliance, or Nosto can plug into a product catalog in days. Monthly fees for a catalog under 200 items typically run $150–$500/month depending on traffic volume. A Western digital agency building a custom recommendation layer from scratch quotes $25,000–$60,000 for the same functional outcome. An AI-native team can build a lightweight, production-ready recommendation system tailored to a small catalog for $6,000–$10,000 with a four-week timeline, because the repetitive parts of that system (the data pipeline, the scoring logic, the API layer) get drafted by AI in hours rather than days.
| Approach | Upfront Cost (Western Agency) | Upfront Cost (AI-Native Team) | Monthly Running Cost | Best For |
|---|---|---|---|---|
| Third-party SaaS tool | $0 | $0 | $150–$500/mo | Founders who want speed, not customization |
| Custom lightweight engine | $25,000–$60,000 | $6,000–$10,000 | $50–$200/mo | Founders who need control over logic or data |
| Managed ML platform | $10,000–$20,000 setup | $3,000–$6,000 setup | $500–$2,000/mo | Larger catalogs or high-volume traffic |
The ongoing infrastructure cost is where the surprise often is: running a recommendation engine on a small catalog typically costs under $100/month in server costs because the product database is small enough to fit in fast memory and the scoring math runs in milliseconds. You only pay for compute when a user loads a page, not continuously.
A 2023 McKinsey report on AI adoption found that mid-market companies saw a median 15% revenue lift from personalization features, and the ones with the strongest ROI were not the largest companies. They were the ones with repeat customer bases and products that naturally clustered into preference groups.
For a small-catalog founder, the calculation is straightforward: if a recommendation engine increases average order value by 10% and you are doing $300,000/year in revenue, that is $30,000 in incremental sales against a $150–$500/month tool cost. The math is favorable before you even consider custom builds.
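The back-of-envelope version in code, using the figures from this paragraph and the worst-case SaaS tier from the table above:

```python
def incremental_sales(annual_revenue, aov_lift):
    """Extra revenue from lifting average order value by aov_lift."""
    return annual_revenue * aov_lift

lift = incremental_sales(300_000, 0.10)   # 10% AOV lift on $300k/yr
annual_tool_cost = 500 * 12               # worst-case SaaS tier
print(lift, annual_tool_cost, lift / annual_tool_cost)
```

Even at the most expensive off-the-shelf tier, the tool pays for itself several times over.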
Timespade builds predictive AI systems, including recommendation engines, as one of its four core verticals. A custom recommendation layer built for a small catalog takes about four weeks and costs $6,000–$10,000. A Western agency charges $25,000–$60,000 for the same scope. The reason for the gap is not corners cut. It is that AI handles the repetitive engineering work that used to pad every invoice, while senior engineers focus on the decisions specific to your business: which signals matter, how to handle cold-start users, and when to override the algorithm with business rules.
If you want to understand whether your catalog size and user behavior actually justify a recommendation engine, the fastest way is a discovery call. We will tell you honestly if the math works for your situation. Book a free discovery call
