Star ratings are very broken, because everyone seems to think of the rating differently. IMO the criteria should be like this:
⭐️ ⭐️ ⭐️ ⭐️ ⭐️ = The best thing ever.
⭐️ ⭐️ ⭐️ ⭐️ ⚫️ = Above average.
⭐️ ⭐️ ⭐️ ⚫️ ⚫️ = It’s ok. Gets the job done. Not great, not terrible. The usual. Nothing special. Totally average.
⭐️ ⭐️ ⚫️ ⚫️ ⚫️ = Below average. I’m disappointed.
⭐️ ⚫️ ⚫️ ⚫️ ⚫️ = Worst thing ever. Crime against humanity. Ban this product and burn the remaining stock immediately.
However, in reality people tend to use it like this:
⭐️ ⭐️ ⭐️ ⭐️ ⭐️ = It’s ok. I don’t have anything to complain about. Could be ok, could be great or anything in between.
⭐️ ⚫️ ⚫️ ⚫️ ⚫️ = I’m not happy. Minor complaints, big complaints and anything in between.
When it comes to book reviews, the five star reviews tend to be useless. Especially with self help books, it seems like those reviews were written by people who are completely incapable of noticing any flaws in the book. I’m inclined to think those people shouldn’t even review a book if they can’t think about it critically. On Amazon there are always lots and lots of fake reviews produced by click farms, but elsewhere you’ll find genuinely incompetent reviewers too.
My favorite is the one star review when it’s a shipping issue. “Look at this, the box is ripped!”
LOL, if only you could rate the shipping company… those reviews could be fun to read.
Even better: the one-star review on the pre-order page complaining that it’s not out yet!
Five stars also means “this was just delivered to me and I haven’t even used it yet”.
Oh, totally forgot that one. The box exists -> instant 5 stars.
I think the problem is partly the fault of companies that insist, at least where rated interactions with employees are concerned, that every interaction should be five stars, which in a system where all the stars are meaningful like this is simply not realistic. This gives people the general sense that giving anything you don’t completely dislike less than 5 stars is a bad thing to do, because it risks hurting some employee somewhere who doesn’t deserve it.
Tbh, that different understanding doesn’t matter much if you have enough reviews, since it averages out.
If you compare two products with one review each, then yes, it hugely matters whether the one reviewer considered 5 stars as “expectations fulfilled” or “the best thing that happened to me ever”.
But if you have >1k reviews, both products will get roughly equal shares of both reviewer groups and it will average out, so that both products will be comparable based on their stars.
That’s a big misunderstanding many people have about reviews. Many people are also afraid that unfair reviewers will skew the score. But since these unfair reviewers are usually spread equally over all products, it averages out as soon as you have a couple hundred reviews.
And that’s also what that article criticises. How many reviews an item has is much more important than the exact value. It’s easy to get a straight 5-star rating if you have only a single review. It’s much harder to do so if you have 10k reviews.
So the informational value is: with <100 or so reviews, the rating means little to nothing; with >1000 reviews it can usually be trusted.
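To illustrate the averaging point with a quick back-of-the-envelope simulation (the 50/50 rater mix and the star probabilities below are made up purely for illustration, not taken from any real data): two products of identical quality, each rated by a blend of lenient and strict reviewers, can look wildly different with a single review but end up with almost identical averages once the review count reaches the thousands.

```python
import random

# Hypothetical sketch: two products of identical underlying quality, each reviewed
# by a 50/50 mix of "lenient" raters (5 stars = no complaints) and "strict" raters
# (5 stars = best thing ever). The rater mix and star distributions are invented.

def simulate_average(n_reviews, seed):
    rng = random.Random(seed)
    total = 0
    for _ in range(n_reviews):
        if rng.random() < 0.5:              # lenient rater: almost everything gets 4-5 stars
            total += rng.choice([4, 5, 5, 5])
        else:                               # strict rater: "fine" is only 3 stars
            total += rng.choice([2, 3, 3, 4])
    return total / n_reviews

# Products A and B differ only in which random reviewers happened to show up.
for n in (1, 10, 100, 1000, 10000):
    a = simulate_average(n, seed=1)
    b = simulate_average(n, seed=2)
    print(f"{n:>6} reviews: A={a:.2f}  B={b:.2f}  gap={abs(a - b):.2f}")
```

With one review the gap between the two identical products can be several stars; with ten thousand it shrinks to a few hundredths, which is the whole point about trusting high review counts more than the exact value.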
To be fair, I think it is reasonable to rate things you have no complaints about as high as possible. If I see a rating with three stars, I assume that it was okay with a few rough spots. I like the idea that all products start out at five stars unless there is something really wrong, and you start knocking off points for problems.
Then how do you tell the difference between a good product and a great product? For example, imagine you’re comparing an electric screwdriver from Lidl (Parkside) and Bosch. Should they both get 5 stars because nothing is broken and they get the job done?
Lidl screwdriver = adequate for occasional DIY, 3 stars
Bosch screwdriver = can be used as a substitute chisel or crowbar every day for a lifetime, 5 stars
What it’s really useful for, however, is deciding which reviews to read.
People don’t buy 4-star reviews, and angry people don’t give them. Those middle three star levels are the reviews worth reading.
Can confirm. Especially with book reviews, I always skip the 5 star reviews and focus on reading the others.