1,593 Views · 103 Replies


The Instructables Rating System (Answered)

As you have all noticed, Instructables has recently implemented a new rating system, based on stars (insert 2001: A Space Odyssey joke here). We've done a bit of experimenting -- and reserve the right to do more in the future -- but it's stable for now, so here's how it works.

UPDATE 2010-09-24: The below calculation has been tweaked to bring Instructables' ratings closer to the level of ratings of comparable content on the Internet generally.  We found that we run about a star low, based on a rough look at a bunch of other sites and our (totally unbiased of course!) estimates of their content quality.  So, we have added a star to all ratings across the board.  It's like grade inflation... but we know our users put up really good stuff and we want that to be recognized.

UPDATE: we've tweaked the system a bit more. The below describes the rating system as of 2008-11-15.

The rating system uses a weighted average, where Instructables with only a few ratings, or ratings by users who have not rated many Instructables, have a total closer to the middle of the star range than the actual average of the ratings. For example, an Instructable with one five-star rating will have a total closer to 4 than to 5, because there are so few ratings on it. If the user who rated it hasn't rated other Instructables, the total will be lower than if the user who rated it has done a lot of rating in the past. The purpose of this difference is to minimize the effect of 'shill' ratings; we've seen some members attempt to bump their ratings by creating fake accounts just to rate up an Instructable or two!

As more people rate each Instructable, the total becomes closer and closer to the true average of the ratings. And as each user rates more and more Instructables, we weight their rating higher and higher. We have done it this way because we want the rating to reflect the opinion of the whole community, and we believe that members who do a lot of rating have been around enough to have a good sense of what makes a good Instructable. Also, members who comment and rate have their rating counted higher, as the comment indicates a higher level of involvement with the Instructable than rating alone.

This method makes it so a less-than-stellar Instructable can't be pushed to the very top of the ratings because the author got their friends to sign up and rate it at 5 stars; similarly a really good Instructable can't be dragged too far down by a malicious user.

For the mathy amongst you, this is the formula (we changed the way one component is calculated):

            (siteAvgNumRates * siteAvgRating) + (numRates(ible) * avgRating(ible))
R(ible) =  ------------------------------------------------------------------------
                            siteAvgNumRates + numRates(ible)
The avgRating(ible) is no longer a straight average, but a weighted one. It's calculated like this:
      (W1*R1 + W2*R2 + ... + Wx*Rx) + 2*(V1*C1 + V2*C2 + ... + Vy*Cy)
      ---------------------------------------------------------------
           (W1 + W2 + ... + Wx) + 2*(V1 + V2 + ... + Vy)

Pretty complicated algebra, huh? W and V are both weight values, between 0 and 1. R is the rating of someone who has not commented, and C is the rating of someone who has commented.

siteAvgNumRates and siteAvgRating are estimates, held constant in the pursuit of not having ALL ratings change every time one person rates one Instructable.

As you can see, when the number of ratings for a particular Instructable is low, the site averages count for much more of the rating value. When the rating count is high, the site averages are only a small part of the total rating value. We believe this algorithm strikes a good balance between letting the cream rise to the top and preventing 'rating spam', to coin a term.
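For anyone who'd rather read code than algebra, here is a minimal Python sketch of the calculation described above. The site-wide constants (a 3.0 average over 20 ratings) and the per-user weights are made-up illustration values, not the site's actual numbers; the real implementation isn't published.

```python
# Illustrative sketch of the rating calculation described above.
# The site-wide constants and the weighting values are assumptions
# for demonstration; Instructables' real numbers are not published.

SITE_AVG_RATING = 3.0      # assumed siteAvgRating (illustration only)
SITE_AVG_NUM_RATES = 20    # assumed siteAvgNumRates (illustration only)


def weighted_avg_rating(plain_ratings, commenter_ratings):
    """avgRating(ible): weighted average of the individual ratings.

    plain_ratings     -- list of (weight, rating) from users who only rated
    commenter_ratings -- list of (weight, rating) from users who also commented
    Weights are between 0 and 1; commenters' ratings count double.
    """
    num = sum(w * r for w, r in plain_ratings) + 2 * sum(v * c for v, c in commenter_ratings)
    den = sum(w for w, _ in plain_ratings) + 2 * sum(v for v, _ in commenter_ratings)
    return num / den if den else SITE_AVG_RATING


def overall_rating(plain_ratings, commenter_ratings):
    """R(ible): blend the Instructable's own weighted average with the site average."""
    num_rates = len(plain_ratings) + len(commenter_ratings)
    if num_rates == 0:
        return SITE_AVG_RATING
    avg = weighted_avg_rating(plain_ratings, commenter_ratings)
    return ((SITE_AVG_NUM_RATES * SITE_AVG_RATING + num_rates * avg)
            / (SITE_AVG_NUM_RATES + num_rates))


# One five-star rating from a lightly weighted new user stays near the site average:
print(round(overall_rating([(0.3, 5.0)], []), 2))                      # ~3.1
# Many five-star ratings from established users pull the total toward 5:
print(round(overall_rating([(1.0, 5.0)] * 30, [(1.0, 5.0)] * 10), 2))  # ~4.33
```

With these assumed constants, a lone five-star rating lands just above 3, while a pile of five-star ratings climbs toward (but never quite reaches) 5, which matches the behaviour described above.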

Please comment!

Discussions

As announced early this morning (23 Sep 2010), the calculation of ratings has been changed, to essentially bump all the values up by approximately one star. The details haven't yet been propagated to this forum topic.

rachel · 10 years ago

@highvoltage2000, we don't currently show details of who rated what anywhere. But it's an interesting idea, what do people think? As an author, should we display this info? As a rater, should we display it?

As an author, I'm quite curious as to who rated my instructables, and how that would compare to their other ratings. As a rater, I've started letting authors know when I rate their projects for that reason. Since we won't ever know the site averages, we could at least see how people had ranked it. If you take away anonymity, it would also help people spot "rating spam", would it not? I think people would be more likely to rate more accurately if they knew people could see what they'd been up to.

Thanks, I appreciate the input! We are still undecided about whether to expose this info or not, so it's good to have people's opinions.

I would also be interested in Nacho's suggestion. I don't care about who rated, but the frequency distribution would have some useful information: unimodal, bimodal, flat, and so on...

. How about:
. xxx people rated it 1 star
. xxx people rated it 2 stars
. ...
. and if you want to get real fancy, add the rating formula with the appropriate values filled in and highlighted.
. Authors would get the info they want and the raters are still anonymous.

I think it would be interesting to see who gave it a rating at least (the people who hit + used to be displayed, after all).

It would also be interesting to see a histogram of the ratings - is the rating average because it's had a lot of average ratings, or because it's had a lot of low-ish ones and a few high ones?

I would suggest the same as Nacho: just make it visible for the authors to see how many people gave 1 star, 2 stars, etc. That way you won't get people spamming other people with messages like "why did you only give my ible one star?" or "hey thanks for rating" and that sort of stuff.

That's one of the plagues you have on, for example, DeviantArt. They can see who has favorited their artwork and start spamming everyone with "thanks for the fav". Although that's very friendly, it also gets annoying after a while.

So yeah, what Nacho said, but only for authors please :)

I was recently qualified, by Instructables, for a different contest. Why, I don't know. I think my 'able, and others, were qualified but have NOTHING to do with the contest. Therefore, when people look at our 'ables, they vote very low on them, and that hurts our chances in the real contest for which they were intended.
If I try to remove my 'able, I am not allowed, because it is in a contest. Your methods leave something to be desired. In the future, persons would be wise to stay away from contests completely if the 'able doesn't even remotely resemble the new contest.

You have to enter stuff into contests. You might have forgotten doing it, but you ought to be the only person who would add your stuff into a contest.

L

You're right, I entered it. I wouldn't have, but Instructables said it qualified, so I entered it. Now I'm sorry I entered it. It is not a Valentine entry; it is a SEW WARM entry. So, how do I un-enter it?
I think my 'able, and others, were qualified but have NOTHING to do with the contest. Therefore, when people look at our 'ables, they vote very low on them, and that hurts our chances in the real contest for which they were intended. Persons would be wise to stay away from contests completely if their 'able doesn't remotely resemble the new contest.

You are correct that people should think about entering competitions, but you are misreading what the system means by "qualified".
The "Publish" page lists competitions which are currently open to entries; it does not tell you that your work has qualified, it just offers them as options.
(Try asking fungus amungus nicely and he might be able to pull them out for you)

L

Just a check - did Fungus fix this for you? It looks that way to me.

L

I didn't want to bother him. I now know what "qualified" means. Simply put, it means it's within the qualifying dates. Maybe that should be how it is worded, to avoid confusion, but lesson learned. Thanks, L.

I looked and I think you've got fewer competition entries?

L

 "based on stars (insert 2001 A Space Odyssey joke here)."

Now that's funny :)

I also think that people don't have a clue how to vote. Nothing tells an onlooker that one should click on a "star" to vote. As for me, I made an avatar of what to do... Please have someone put better voting instructions in a convenient area. Thanks

I think it's a good idea to see who's voting, just like we can see our stats on which sites are picking up our 'ables. It could be in the same column. I had a pro tell me what was wrong with one of my 'ables and he was a bit uppity. His inference was one of comparing my cheap photo program to a more costly one. Not everyone can afford a $300 program. My point is, anyone, even a pro, can wreak havoc on a person's efforts. If a voter aims to lower someone's rating, everyone should be able to see who is hacking away at a noob. My Instructable was a comical, tongue-in-cheek style of effort and wasn't supposed to be perfect by design. I was miffed.
At this moment, my rating went down after being sky high, and I'm wondering what the heck happened.

Hi,
I recently published an instructable "Make a LED light wand!"
My brother rated it (he is a member too). He rated it higher than the current rating, and the rating went down!
Is that because the site thought that I had duplicate accounts (we both use the same internet connection, so it is probably the same IP) and was trying to rate my own?
You can see that he is not a shill, because he has been a member for years, although he has not published yet, and I have not.
Thanks,
ALPHA G33K

It's averaged, and weighted. If person number one rated it 5*, it'd come out around 3* (they always do). If your brother gave it less than 5*, it'd go down, as the average would be lower. If person number one rated it less than 3*, it'd still come out around 3* (they always do, due to the "site average factor"), but the weighting would move away from the site average with rating number two. It's a bit odd to see, but it comes down to the ratings only really becoming meaningful when you have a lot of them.
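To make that "site average factor" concrete, here is a rough numeric sketch. The constants (a site average of 3.0 over 20 ratings) are assumptions, since the real values aren't published, and the per-user weights are ignored:

```python
# Assumed site-wide constants; Instructables' real values are not published.
SITE_AVG_RATING, SITE_AVG_NUM_RATES = 3.0, 20

def blended(ratings):
    """Blend the simple average of an ible's ratings with the site average."""
    n = len(ratings)
    avg = sum(ratings) / n
    return (SITE_AVG_NUM_RATES * SITE_AVG_RATING + n * avg) / (SITE_AVG_NUM_RATES + n)

print(blended([5.0]))        # one 5-star rating  -> ~3.10
print(blended([5.0] * 10))   # ten 5-star ratings -> ~3.67
print(blended([5.0] * 100))  # a hundred          -> ~4.67
```

Under these assumptions the displayed number only creeps toward the raw average as ratings accumulate, which is why early movements can look surprising.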
More detail on ratings

L


Yes, I think the ratings should be shown. Like I mention in my comment below - it would help tell the difference between a low rating because raters aren't liking the instructable and a low rating because the author is new.

The rating system applies equally to all postings. The newness of a user only affects the weight their ratings carry.

L

Thanks for the clarification. That makes more sense than the way I was misunderstanding it. I still think the rating should be shown.

It might be of use, but if you got a lot of ratings it might get meaningless?

L

So if I rate tons of stuff, my ratings get worth more? BONUS!

I think I see how this all works (though there's so much other commenting in this I'm not sure if I missed something important). I guess I'd like to weigh in as a new user and say that this hurts those of us starting out. When I'm looking at other instructables I stick to the ones that are rated 4 stars and up. I think I posted a good instructable, but I also think that the relatively low rating is going to keep a lot of people from looking at it.
I don't entirely disagree with the system (it would be easy for me to up my rating by emailing a few friends); I just think that there should be a way to note in the rating that it isn't low because I got low ratings. For example, on mine it could say "3.19 stars - 2 ratings of 5 stars, 1 rating of 4 stars".

Does this mean that, even if everybody gives an ible 5 stars, it still won't get that rating?

Precisely. Because it's averaged out, no ible can ever get exactly 5 stars.

Stinks, huh? :(

you commented here! omg ps i will be changing my username to Senor Tatertots soon

Yes, but it also means you can't have people game the system to give your I'ble a zero rating.

Hehe. And there you see the difference between a pessimist and an optimist very neatly illustrated. :P

OK, I'm still puzzled. I posted my first one and it shows a rating of 3.06, but in blue after it there is a "0.5 Worthless". It says "(1 rating)" so I'm really confused. Why doesn't it give you a way to see the ratings distribution?

The individual rating values are not accessible to users.

I've gathered some more data (chart). Most of the columns are rounded off to the nearest 200. I don't really know about the anomaly around 3.00, but I'd guess it's in some way related to the site average factor? L

[attached chart: stats.bmp]

Woo, hoo! Thank you for the plot :-) I'm intrigued that you're using a left-handed coordinate system, but that's by the bye. Do you have information on the number of ratings for each entry? If so, could you make the 2D plot? I think that should reveal that roughly 1200 of the entries in the 3.00 bin have so few ratings that they're pegged at the site mean.

I'm having to click on things - I can't get the number of ratings for each entry. I used the left-handed plot because this is the way sort-by-rating presents them, and the left-hand side is a nice curve. L

If I rate an unrated Instructable, it ends up with a rating fairly close to 3 regardless. It strikes me that giving unrated Instructables a rating less than 3 is counterproductive because it gives them a false rating, and that I shouldn't bother unless they're already rated? L

Well, somebody has to go first! If later ratings tend to agree with yours, the star count will adjust closer to what people actually think, away from the site average. But we can't let a lone rater have sole say on the star count, as that invites people to game the system. For people who have made a lot of ratings, the rating is 'heavier' compared to the site average. For people who commented, the rating is more weighty still. I don't know how much rating you have done, but the more you do, the more seriously the algorithm takes each one. So as you do more rating under the new system, you'll see your ratings have more of an effect.

I appreciate that this is weighted, but from some research I did, the distribution of ratings (below) is heavily concentrated around 3-3.5 (about half of all Instructable ratings are in this band). The distribution seems to be far from "normal".

L

Your results are correct, and the conclusion you draw from them is not unreasonable. However, given the "new" weighted-average system, I think this analysis needs a second dimension. When you collected your data, did you keep track of both the average and the number of ratings? If so, I suspect that a 2D plot would show some non-trivial structure.

Hey, Rachel, is this data that you or Ed might have access to?

In particular, I would predict something like the following (forgive the crappy ASCII):
      5+           ___==
      4+      __=====^^^
      3+=======----. . .
      2+      --======__
      1+           ---==
      0+----+----+----+----
            10   100  1000

In other words, at low counts, you get a very narrow distribution right at the I'bles global average. At intermediate counts, you're still centered on the global average, but it's starting to spread. Then at very high counts, you start to see something more bimodal, with the truly crappy ones pushed down to low ratings, and the really good ones pushed high.

I predict that there won't be many in the middle at high counts, because I think people won't be bothering to submit lots and lots of "average" ratings.
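If that per-rating-count data ever became available, a 2D histogram along these lines would test the prediction. The sketch below uses randomly generated stand-in numbers purely to show the plotting approach (matplotlib's hist2d), not real site data:

```python
# Hypothetical sketch: bin Instructables by rating count (log scale) vs. stars.
# The data below is invented stand-in data, NOT real Instructables ratings.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
num_ratings = rng.lognormal(mean=2.0, sigma=1.0, size=5000)   # fake rating counts
stars = np.clip(rng.normal(3.2, 0.6, size=5000), 0.5, 5.0)    # fake displayed ratings

plt.hist2d(np.log10(num_ratings), stars, bins=[30, 20])
plt.xlabel("log10(number of ratings)")
plt.ylabel("displayed rating (stars)")
plt.colorbar(label="Instructables per bin")
plt.show()
```

With real data, the prediction above would show up as a narrow band at the global average for low counts that spreads, and eventually splits, as the count grows.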

>Blushes<

Look, Kelsey, you're an awesome guy and all that, but we're too different, and you live like 2000 miles away, and then there's the issue of the cats (all 39 of them) - it just wouldn't work.

Experience has taught me that someone will take this seriously. So, no, that was a joke. Come on. 39 cats?! Really?! I have ONLY 29 cats. Seriously. What kind of weirdo do you people think I am?!

I'm also married with a 10-month-old daughter :-) And I only have one cat.

And that, dear ladies and gentlemen, is how a true ~~stalker~~ detective finds out how many cats a person has. Watch and learn from the master. (You may want to take notes.)

By pulling out your girlish wiles and having eyes like glazed donuts? Hmmmm...not very "liberated" if you ask me :->

After your catcall, the cat must have got his tongue...

The preceding pun-mentary was provided by the Coalition of People Whose Eyes Glaze Over When Reading Forum Topics That Foster Debate on Esoteric Mathematical Minutiae and Derail the Current Train of Thought to Something More Trivial. That and two bucks will get you on the train.