In CUriosity, experts across the CU Boulder campus answer pressing questions about humans, our planet and the universe beyond.
As the holiday shopping season ramps up, Nicholas Reinholtz, assistant professor of marketing in the Leeds School of Business, delves into the question: “Should you trust 5-star reviews?”
There’s no overstating the influence of online reviews on consumers’ purchase decisions. Nine out of 10 consumers say they consider reviews before making a purchase, and 45% simply won’t purchase a product if it has no reviews available, according to research from consumer research firm PowerReviews.
But there are limitations to relying solely on user ratings and online reviews to evaluate product quality, according to Nicholas Reinholtz, whose research includes product and price search.
Reinholtz sat down with CU Boulder Today to discuss how consumer expectations and other factors affect ratings and how biases in rating systems can lead to inaccurate assessments—and potentially bad purchases.
Should consumers be cautious and avoid getting too swept up in reviews?
My co-author Matt Meister, a former Leeds doctoral student and current assistant professor of marketing at the University of San Francisco, and I have looked at Airbnb ratings, and we have a second paper that looks at ratings from REI. One thing both papers have in common is this idea that expectations can influence ratings.
So if you go onto Amazon and buy a $500 pair of headphones, and there are any problems at all, you give it one star. You say, “I can't believe I paid $500 for a pair of headphones and there is a crackle.” Unacceptable, right? Whereas if you paid $5 for the headphones, you give it five stars because they work.
That expectations should influence ratings makes sense. You can't have a five-point scale that encompasses the entire spectrum of human experience, right? Ratings are relative to the expectations you have going into the product purchase. There are multiple issues with that, and one of them is that when people are looking at products, they don't account for the fact that ratings reflect expectations.
With our research on Airbnb ratings, the point that we're trying to make is that it's totally fine and reasonable that people would give ratings that reflect their expectations. But it's problematic if future consumers don't recognize the role of those expectations and adjust for them accordingly.
Airbnb has this status symbol where they label certain hosts “superhosts.” We look at Airbnbs that are superhosts in some time periods and not superhosts in other time periods, and we find that they get better ratings during the periods where they're not labeled a superhost. So presumably people are going into the experience saying, “Oh, I'm staying at a superhost,” and the same experience is rated slightly worse against those expectations.
Are star ratings meaningless?
We should have a mantra: When you're on Amazon, more stars doesn't mean better. I don't think star ratings are useless because they can, particularly coupled with text reviews, identify truly problematic things, like if something gets a terrible rating.
Say you're using ratings to compare a product that looks better but has only 4.7 stars with a similar product that maybe looks a little bit worse but has 4.9 stars. Those are the types of situations where I think we really need to exercise caution, for a variety of reasons, instead of just blindly following the ratings.
If we rate experiences, it's really hard to disentangle contextual influences from intrinsic ones.
For example, we looked at ratings for winter jackets on REI and merged those ratings with weather data. It turns out that people rate winter jackets better on warmer days and worse on colder days. The reason we think that happens is that you go outside on a super cold day and you're cold, and when you rate the jacket you're wearing, you're like, “Well, I'm cold, so this jacket must not be that great.” Whereas you go out on a warm day, and the jacket feels great, right? It's perfect. You're totally warm.
What’s something surprising you’ve found in your consumer ratings research?
The thing that surprised me the most is how uncritically people accept reviews as a measure of quality. We had a thought experiment related to headphones. We asked study participants to imagine they are looking at two pairs of headphones online. One is a $500 pair of headphones that has a 4.6 rating. The other is a $5 pair of headphones with a 4.8 rating. We asked: Which of these two pairs of headphones do you think are higher quality? We were convinced that everyone would point to the $500 pair. It turns out only about 50% of people did. The other half endorsed the idea that the $5 headphones were higher quality.
As a researcher and expert on the topic, how do you personally use reviews?
We always like to think of ourselves as more savvy. There's a powerful draw of reviews, and I still catch myself looking at them and being like, “I think I'll like it, but, you know, it's a 4.7. Maybe there's something wrong with it.” I was buying carabiners the other day, and I found myself looking for higher-rated carabiners—4.7 versus 4.9. And then I had to be like, “Come on, don’t do this.”
It’s a tough world out there for a consumer. And you don't have many people whose incentives are aligned with yours. These days I find myself gravitating more and more to brands, which is something I didn't do as a younger person, because I feel like you can build trust in brand quality, unlike picking a product on Amazon whose name you’ve never heard of and sounds like alphabet soup.
CU Boulder Today regularly publishes Q&As with our faculty members weighing in on news topics through the lens of their scholarly expertise and research/creative work. The responses here reflect the knowledge and interpretations of the expert and should not be considered the university position on the issue. All publication content is subject to edits for clarity, brevity and university style guidelines.