Most founders discover app store optimization the wrong way: they launch, watch installs plateau, and then scramble to fix it. The good news is that the ranking algorithm is more legible than Google search. You can move the needle within weeks with a few targeted changes, and those changes cost a fraction of paid acquisition.
But the metadata work only goes so far. Ratings, retention, and download velocity are the signals the algorithm trusts most, and those come from the product itself. The teams that rank consistently are the ones that treat ASO as an engineering decision, not a marketing afterthought.
How does the app store ranking algorithm decide what to show?
Both the Apple App Store and Google Play use a multi-signal ranking model. Neither has published exact weights, but the patterns from thousands of published ASO experiments make the priorities clear.
The algorithm is trying to answer one question: if we surface this app for this query, will the user download it, use it, and not immediately delete it? That shapes everything. An app that gets downloaded but churned fast ranks below one with fewer installs but strong retention, because the algorithm reads churn as the user saying "this wasn't what I expected."
Apple's App Store connects keyword rankings directly to search relevance in the title, subtitle, and keyword field. Google Play's algorithm leans more on the full description text and also factors in in-app behavior data from Android devices. The core inputs are the same: relevance, engagement, and quality signals. Sensor Tower's 2022 analysis of 100,000 apps found that search drives 65–70% of all app downloads. Paid ads and social referrals together account for less than a quarter. Getting organic search right is the highest-leverage move available to most founders.
Which metadata fields have the most impact on discoverability?
There are four fields that move rankings. Everything else is secondary.
The title carries the most weight in both stores. Apple indexes every word in the title for search, so a title like "Lumio: Sleep Tracker" ranks for "sleep tracker" and, through stemming, close variants like "sleep tracking" with no further optimization. A brand-only title like "Lumio" forces you to fight for ranking on every keyword separately. The 30-character title limit on iOS means every character matters: put your primary keyword in the first 15 characters.
The subtitle on iOS (30 characters) and the short description on Android (80 characters) are the second most influential fields. They are indexed for search and they appear in search results, so they affect both ranking and click-through rate. Think of the subtitle as a second title: use your secondary keyword here, not a tagline.
The keyword field on iOS (100 characters, comma-separated) is invisible to users but fully indexed. Do not repeat words from your title or subtitle; those fields are already indexed, so duplicates waste characters. Spend the 100 characters on terms you cannot naturally fit into the visible fields.
The long description on Google Play is the fourth lever. Google indexes the full text, so terms that appear two or three times in the description carry real weight. Apple does not index the long description for search, so treat it purely as a conversion tool there.
One data point worth keeping in mind: StoreMaven's 2022 research found that apps with a keyword in the title rank 10.3% higher on average for that keyword than apps relying on the keyword field alone. That gap compounds across every query you care about.
| Field | iOS Weight | Android Weight | Character Limit | Indexed? |
|---|---|---|---|---|
| Title | Highest | Highest | 30 (iOS) / 50 (Android) | Yes |
| Subtitle / Short description | High | High | 30 (iOS) / 80 (Android) | Yes |
| Keyword field (iOS only) | High | N/A | 100 | Yes |
| Long description | Low | High | 4,000 | iOS: No / Android: Yes |
| Developer name | Low | Low | N/A | Yes (partial) |
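The limits above are easy to get wrong by hand, especially the rule about not duplicating title and subtitle words in the keyword field. Here is a minimal pre-submission check, a sketch only: the function names, the example app metadata, and the duplicate-detection heuristic are illustrative, while the character limits themselves are the published ones cited in the table.

```python
# Illustrative pre-submission metadata check. Limits match the table above;
# everything else (names, sample data) is hypothetical.

IOS_LIMITS = {"title": 30, "subtitle": 30, "keywords": 100}
ANDROID_LIMITS = {"title": 50, "short_description": 80}

def check_fields(fields: dict, limits: dict) -> list:
    """Return human-readable problems; an empty list means the metadata fits."""
    problems = []
    for name, limit in limits.items():
        value = fields.get(name, "")
        if len(value) > limit:
            problems.append(f"{name}: {len(value)} chars (limit {limit})")
    return problems

def wasted_keywords(title: str, subtitle: str, keywords: str) -> set:
    """Keyword-field terms already present in title/subtitle are wasted characters."""
    visible = set((title + " " + subtitle).lower().replace(":", " ").split())
    return {k.strip() for k in keywords.lower().split(",")} & visible

fields = {"title": "Lumio: Sleep Tracker",
          "subtitle": "Insomnia & Nap Sounds",
          "keywords": "sleep,tracker,insomnia,rem,smart alarm"}
print(check_fields(fields, IOS_LIMITS))   # [] -- everything fits
print(wasted_keywords(fields["title"], fields["subtitle"], fields["keywords"]))
# flags sleep, tracker, insomnia as wasted keyword-field characters
```

Running a check like this on every metadata change costs nothing and catches the most common mistake: silently burning keyword-field characters on terms the title already covers.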
Do ratings and reviews actually move my ranking?
Yes, and the effect is larger than most founders expect. But volume alone is not the mechanism.
The algorithm looks at three distinct things: your average rating, the velocity of new reviews, and how recent they are. An app sitting at 4.8 stars with its last review from eight months ago gets less ranking credit than an app at 4.5 stars with 50 new reviews this month. Recency signals that the app is active and that users are still engaging with it. A static review count tells the algorithm that acquisition has stalled.
Apple confirmed in developer documentation that ratings and reviews are a formal ranking input. Google Play weights them in a similar way, with the added wrinkle that Android devices report in-app behavior back to the algorithm. On Android, the algorithm can see whether users actually open the app again after the first session, which adds a behavioral layer that review volume alone cannot fake.
The practical implication: ask for reviews at the right moment, not at launch. A prompt that fires after a user completes their first meaningful action, like finishing a task, making a booking, or hitting a milestone, converts at roughly 3x the rate of a generic "do you like this app?" prompt (Apptentive, 2022). The users who have just succeeded at something are the ones most likely to leave a positive review.
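The "prompt after a success moment" rule reduces to a small gating check. The sketch below is an assumption-laden illustration, not a library API: the function, the three-milestone threshold, and the 90-day cooldown are all hypothetical tuning choices, and on iOS the actual prompt would go through StoreKit's review-request API, which applies its own rate limits on top of anything you gate.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical gating logic: only ask after a success moment, only once the
# user has succeeded a few times, and never more often than the cooldown.
COOLDOWN = timedelta(days=90)  # assumption: at most one ask per quarter

def should_request_review(just_completed_milestone: bool,
                          milestones_total: int,
                          last_prompt: Optional[datetime],
                          now: datetime) -> bool:
    if not just_completed_milestone:
        return False   # never prompt on a random screen
    if milestones_total < 3:
        return False   # let the user succeed a few times first
    if last_prompt is not None and now - last_prompt < COOLDOWN:
        return False   # cooldown: avoid prompt fatigue
    return True

now = datetime(2023, 5, 1)
print(should_request_review(True, 5, None, now))                      # True
print(should_request_review(True, 5, now - timedelta(days=10), now))  # False
```

The design choice worth copying is that the gate keys on a completed action, not elapsed time: a timer fires on frustrated and delighted users alike, while a milestone fires only on users who just got value.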
Apps with fewer than 100 ratings are penalized in category rankings on both stores. If you are pre-launch, getting to 100 ratings should be treated as a launch-week engineering task, not a long-term hope.
| Rating Signal | Why the Algorithm Cares | What You Should Do |
|---|---|---|
| Average star rating | Proxy for product quality | Prompt users after a success moment, not at random |
| Review velocity (new reviews/month) | Signals the app is active and growing | Build in-app review prompts into your update cycle |
| Review recency | Stale reviews suggest stalled growth | Respond to reviews; replies increase new review rates by 12% (AppFollow, 2022) |
| Review sentiment | NLP on review text affects store category features | Monitor keywords in reviews and fix recurring complaints fast |
What role do download velocity and retention play?
Download velocity is the algorithm's most responsive signal. When your app gets a spike of installs over a short period, the stores treat it as evidence that something external is driving demand, and they reward it with higher placement in browse and category views. This is why a well-timed product launch or press mention can move rankings more than months of steady organic growth.
The catch is that velocity without retention cancels itself out. Both stores track what happens after the install. Apple's App Store uses opt-in analytics, but a meaningful fraction of users share app usage data. Google Play has broader behavioral visibility through Android. If users install your app and delete it within 48 hours, the algorithm reads that as a bad user experience and pulls your ranking down. SplitMetrics' 2022 data showed that apps with day-7 retention below 20% lose an average of 12 ranking positions within 30 days of a download spike, even when the spike was substantial.
Retention is the compounding variable here. An app that retains 40% of users at day 7 builds a growing base of active users who generate ongoing engagement signals. Review velocity goes up naturally. Session counts rise. The algorithm sees an active, healthy app and rewards it with sustained ranking improvement. That is the difference between a ranking bump that lasts a week and one that lasts a year.
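Day-7 retention is worth computing from your own event log rather than waiting for the store dashboards. A minimal sketch, assuming you have install dates and session dates per user; it uses the classic day-N definition (a session exactly 7 days after install), while some teams instead count any session within the first week, so pick one definition and stick with it.

```python
from datetime import date, timedelta

def day7_retention(installs: dict, sessions: list) -> float:
    """Share of installers with a session exactly 7 days after install.

    installs: {user_id: install_date}
    sessions: [(user_id, session_date), ...]
    """
    if not installs:
        return 0.0
    retained = set()
    for user, day in sessions:
        install = installs.get(user)
        if install is not None and day - install == timedelta(days=7):
            retained.add(user)
    return len(retained) / len(installs)

installs = {"a": date(2023, 1, 1), "b": date(2023, 1, 1), "c": date(2023, 1, 2)}
sessions = [("a", date(2023, 1, 8)),   # day 7 for a -> retained
            ("b", date(2023, 1, 3)),   # day 2 only -> churned
            ("c", date(2023, 1, 9))]   # day 7 for c -> retained
print(day7_retention(installs, sessions))  # ~0.67 (2 of 3 installers)
```

Tracking this number weekly, per install cohort, tells you whether a download spike is about to help or hurt your ranking long before the store reacts.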
From an engineering standpoint, retention is determined before launch. The architecture choices that make your app load quickly, work offline, and respond without lag are the same choices that keep day-7 retention above 30%. An app that loads in under two seconds retains 22% more users at day 7 than one that loads in four seconds (Google, 2022). Those users generate reviews, return sessions, and ranking signals that no amount of keyword optimization can replicate.
This is where the build team matters more than most founders realize. A team that ships an app with fast load times, stable performance, and clean offline behavior is also shipping an app that ranks. A team that cuts corners on performance is shipping an app that churns users and slides down the rankings regardless of how good the metadata is.
Timespade builds apps that load in under two seconds with infrastructure that costs roughly $0.05 per user per month to run. The reason this matters for ASO is direct: fast apps retain users, retained users leave reviews, reviews drive rankings, and rankings drive organic installs. Each one of those steps follows from the one before it. The engineering decisions made in the first 28 days of development set the ceiling on where the app can rank six months later.
If your current app is slow or unstable, no amount of keyword work will get you past apps that have solved the retention problem. Fix the product first, then optimize the store presence. The order of operations matters.
The founders who rank well long-term treat ASO as two parallel tracks running from day one: metadata optimization you can do today, and product quality that compounds over every release cycle. Neither track alone gets you to the top of a competitive category. Both together do.
