Google has quietly started telling us how many reviews businesses have had removed. What does this look like for a chain everyone is talking about? LAP Coffee, a star venture that is growing fast (they seem to be popping up everywhere).
What if I told you their reviews might be a whole star too high, averaged across all locations?
I’m not a fan, which is exactly why I’m interested in what the data says — am I biased, or are the ratings?
The data
Distribution at a glance
Of the 32 stores with reviews, 22 show the removal banner with a count. The 21–50 bucket is the most common, while the top bucket (51–100) contains 4 stores.
How can we use it?
I decided to work backwards from the data we have and reconstruct a confidence window of possible scores, instead of just assuming the worst case.
I chose three scenarios, ranging from generous to pessimistic:
- Best: bucket minimum count, all assumed to be 2★ — the most generous to the displayed number.
- Likely: bucket maximum count, an 80% 1★ / 20% 2★ mix (avg 1.2★).
- Worst: bucket maximum count, all assumed 1★ — the lower bound.
Formula:
adj_rating = (5·★5 + 4·★4 + 3·★3 + 2·★2 + 1·★1 + removed × assumed_avg) / (total + removed)
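To make the three scenarios concrete, here is a minimal sketch of the calculation in Python. The per-star counts are hypothetical, purely to show the mechanics; the removal counts and assumed averages are the ones from the scenarios above.

```python
def adjusted_rating(star_counts: dict[int, int], removed: int, assumed_avg: float) -> float:
    """Fold the removed reviews back in at an assumed average star value."""
    total = sum(star_counts.values())
    weighted = sum(star * count for star, count in star_counts.items())
    return (weighted + removed * assumed_avg) / (total + removed)

# Hypothetical per-star distribution for a store in the 21-50 removal bucket.
stars = {5: 35, 4: 3, 3: 1, 2: 0, 1: 2}

best   = adjusted_rating(stars, removed=21, assumed_avg=2.0)  # bucket minimum, all 2-star
likely = adjusted_rating(stars, removed=50, assumed_avg=1.2)  # bucket maximum, 80/20 mix
worst  = adjusted_rating(stars, removed=50, assumed_avg=1.0)  # bucket maximum, all 1-star
print(f"best {best:.2f} | likely {likely:.2f} | worst {worst:.2f}")
```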
These are the stores with the widest bands:
Walking past Falckensteinstr. I might see a 4.9, which is quite high. But the displayed rating isn't even inside the confidence window (2.76–3.92): it sits a full star above the best case. KALLE HALLE tells the same story, with a band of 2.64 to 3.58.
At both stores the removed-review bucket matches or exceeds the visible review count, meaning half or more of all the reviews ever left for them have been taken down.
Even in the most generous adjustment, most of these eight cases lose more than a star.
Aside: why are worst and likely so close together? The bucket size dwarfs the gap between the 1.0 and 1.2 star averages.
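Concretely, switching the assumed average from 1.0★ to 1.2★ only adds removed × 0.2 / (total + removed) to the score. For Falckensteinstr. (41 visible reviews, a bucket maximum of 50 removed) that is 50 × 0.2 / 91 ≈ 0.11★, which matches the gap between its worst (2.76) and likely (2.87) figures; the rest of the 2.76–3.92 band comes from the best case's smaller bucket count and 2★ assumption.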
A concise list, grouped
Three buckets worth highlighting:
1. The 51 to 100 removed tier
Four stores. These have the largest absolute removal counts on Google’s banner.
| Store | City | Visible reviews | Displayed | Adj. likely | Adj. worst |
|---|---|---|---|---|---|
| Rosenthaler Str. 62 | Berlin | 781 | 4.7 | 4.29 | 4.26 |
| Klenzestraße | München | 310 | 4.4 | 3.66 | 3.61 |
| Krossener Str. | Berlin | 107 | 4.7 | 2.99 | 2.89 |
| KALLE HALLE | Berlin | 89 | 4.5 | 2.75 | 2.64 |
2. Smaller, newer-feeling stores with disproportionate removal share
Stores with fewer visible reviews show a much higher skew — as you’d expect mathematically. My speculation: newer stores get rougher early reviews, and request more takedowns to compensate.
Some of these stores have over a third of their reviews taken down — sometimes closer to 50%.
| Store | City | Visible reviews | Bucket | Removal share (max) | Displayed | Adj. likely |
|---|---|---|---|---|---|---|
| Falckensteinstr. (Training Center) | Berlin | 41 | 21–50 | 55% | 4.9 | 2.87 |
| Bergmannstraße | Berlin | 54 | 21–50 | 48% | 4.2 | 2.77 |
| Adalbertstraße | Berlin | 72 | 21–50 | 41% | 4.6 | 3.22 |
| Fraunhoferstr. | München | 79 | 21–50 | 39% | 4.5 | 3.19 |
| KALLE HALLE | Berlin | 89 | 51–100 | 53% | 4.5 | 2.75 |
| Eppendorfer Baum | Hamburg | 93 | 21–50 | 35% | 4.7 | 3.50 |
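For reference, the removal-share column is simply the bucket maximum divided by the total once removals are added back. A quick sketch, with the bucket bounds as I read them off the banners (not anything Google documents):

```python
# Pessimistic (bucket-maximum) removal share.
# Bucket upper bounds are my reading of the banner text, not an official value.
BUCKET_MAX = {"21-50": 50, "51-100": 100}

def removal_share_max(visible: int, bucket: str) -> float:
    removed_max = BUCKET_MAX[bucket]
    return removed_max / (visible + removed_max)

print(f"{removal_share_max(41, '21-50'):.0%}")   # Falckensteinstr.: 55%
print(f"{removal_share_max(89, '51-100'):.0%}")  # KALLE HALLE: 53%
```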
3. Stores with no banner (or only one removal)
These serve as a baseline: the stores where the displayed rating is closest to the underlying truth.
| Store | City | Visible reviews | Displayed | Banner |
|---|---|---|---|---|
| Neue Promenade | Berlin | 26 | 3.9 | 1 review |
| Schloßstraße | Berlin | 16 | 4.8 | None |
| Olivaer Pl. | Berlin | 34 | 4.9 | None |
| München Hbf | München | 3 | 5.0 | None |
Chain-wide rollup
Averaged across all 32 stores with reviews:
| Scenario | Avg rating |
|---|---|
| Displayed | 4.61 |
| Adj. best (bucket min × 2★) | 4.25 |
| Adj. likely (bucket max × 80/20 mix) | 3.80 |
| Adj. worst (bucket max × 1★) | 3.75 |
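A minimal sketch of the rollup itself, assuming a plain per-store mean of each scenario column (not weighted by review count):

```python
from statistics import mean

# One tuple per store: (displayed, adj_best, adj_likely, adj_worst).
# Placeholder values here; the full 32 rows live in the sheet linked below.
per_store = [
    (4.7, 4.5, 4.3, 4.3),
    (4.4, 3.9, 3.7, 3.6),
    # ...
]

rollup = {
    "Displayed":   mean(row[0] for row in per_store),
    "Adj. best":   mean(row[1] for row in per_store),
    "Adj. likely": mean(row[2] for row in per_store),
    "Adj. worst":  mean(row[3] for row in per_store),
}
for scenario, value in rollup.items():
    print(f"{scenario}: {value:.2f}")
```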
Taking the likely column at face value, the chain’s effective rating sits around 3.8. This is almost a full star below the displayed average of 4.6.
Notice Neue Promenade from group 3: displayed 3.9, with only one review removed. That sits remarkably close to the chain-wide adjusted likely of 3.8 — almost as if the adjusted figure is what LAP’s reviews look like before the takedowns. The canary in the coal mine?
Conclusions
The data is hard to look past: LAP Coffee’s effective rating may be a full star below what’s displayed, once defamation removals are accounted for.
We can’t see the removed reviews, so we can’t judge whether the takedowns were warranted. But the gap between displayed and adjusted ratings is real either way.
Closing thoughts
The “golden rule” I'd been following for Maps reviews was: anything between 4.1 and 4.6 in Berlin will give you a good outcome. I generally excluded places above 4.6 because I didn't trust the score. Will I trust it now? I can't tell; the jury is still out.
Does this mean LAP Coffee is bad coffee? I honestly cannot say. I have never set foot in one, and people who know me know I appreciate a good coffee.
The data presented here covers all of the German LAP Coffee locations. I'm tempted to visit the Berlin ones (since that's where I am) and try them, adding a bit of live research. I might come back with a follow-up post in time.
Methodology
I used Claude CoWork to pull the public Reviews tab on Google Maps for every active LAP location.
For each I recorded:
- the displayed rating and total review count,
- the per-star distribution (5★ through 1★) read from each bar’s ARIA label,
- the bucketed text of the defamation-removal banner if present.
Google does not expose exact removal counts to the public, so I work with buckets throughout.
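For the per-star bars specifically, the extraction boils down to parsing each bar's ARIA label. A rough sketch of that step, assuming labels along the lines of "5 stars, 605 reviews" (the exact wording and locale can differ):

```python
import re

# Hypothetical label format; adjust the pattern to whatever the page actually serves.
LABEL_RE = re.compile(r"(\d)\s*stars?,\s*([\d.,]+)\s*review", re.IGNORECASE)

def parse_star_counts(aria_labels: list[str]) -> dict[int, int]:
    """Turn bar labels like '5 stars, 605 reviews' into {star: count}."""
    counts: dict[int, int] = {}
    for label in aria_labels:
        if match := LABEL_RE.search(label):
            star = int(match.group(1))
            counts[star] = int(match.group(2).replace(",", "").replace(".", ""))
    return counts

print(parse_star_counts(["5 stars, 605 reviews", "1 star, 23 reviews"]))
# {5: 605, 1: 23}
```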
The collected data and calculations can be accessed in this Google Sheet.