this post was submitted on 29 May 2025
483 points (99.2% liked)
Science Memes
i see this all the time with software designed by americans. at an old job we used a tool called "officevibe" where you'd enter your current impression of your role and workplace once a month. you got some random questions to answer on a 10-point scale.
when we were presented with the results, the stats looked terrible because the scale was weighted so that everything below 7 counted as negative. we were all just answering 5 for "it's okay", 3-4 for "could use improvement", and 6-7 for "better than expected". there had never been a 10 in the stats, and the software took that as "this place sucks".
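(For the curious, this is roughly the kind of bucketing being described: a minimal sketch assuming an NPS-style cutoff where anything below 7 counts as negative. The thresholds are illustrative, not OfficeVibe's actual ones.)

```python
# Illustrative sketch only: assumes an NPS-style bucketing where the tool
# treats anything below 7 as negative, 7-8 as neutral, and 9-10 as positive.
# The exact thresholds used by the tool above are an assumption.
def bucket(score: int) -> str:
    if score >= 9:
        return "positive"
    if score >= 7:
        return "neutral"
    return "negative"

responses = [5, 5, 3, 4, 6, 7, 5]  # "it's okay"-style answers from the comment above
print([bucket(s) for s in responses])
# -> mostly "negative", even though everyone meant "fine"
```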
like, of course you downvote a bad response. you're supposed to help the model get better, right?
Recently saw a survey that explicitly said 1-6 is "poor", 7-8 is "OK", and 9-10 is "great". Wild; not sure what the point of the scale is then.
Same with book ratings. Looking at StoryGraph, the average ratings I see are somewhere between 3.5 and 4.5, while I would rate a decent book a 3.
Born in Eastern Europe, live in the US, maybe that's why.
I wonder if it's like the grading system we use in school? Below 60% is an F (fail); 60% to below 70% is a D, which depending on the class can be barely passing or barely failing; 70% and above covers the C, B, and A grades, which are all usually passing, with an A (90% and above) meaning you did extremely well or perfectly. I just noticed that this rating scale kind of lines up with the typical American grading scale; maybe that's just a coincidence.
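(A rough sketch of that mapping, assuming the common US cutoffs described above; exact boundaries vary by school.)

```python
# Typical US letter-grade cutoffs as described in the comment above;
# boundaries vary by school, so treat these values as illustrative.
def letter_grade(percent: float) -> str:
    if percent >= 90:
        return "A"
    if percent >= 80:
        return "B"
    if percent >= 70:
        return "C"
    if percent >= 60:
        return "D"  # barely passing or barely failing, depending on the class
    return "F"

print(letter_grade(65))  # 'D'
print(letter_grade(55))  # 'F', though 55% is a passing mark in many other countries
```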
most countries i know mark <50% as a failing grade
i was unaware most countries still use this terrible score system at all
Apples and watermelons. The all-time highest major league batting average is only .371, nowhere near .500, which would correspond to 50% of the maximum possible.
i have no idea what that means or why it's relevant.
I believe you. On a rating scale of 0-10, a value of 5 doesn't usually represent a failure or anything negative; it's usually a middle concept such as "neither like nor dislike". Batting average is another example where 50% isn't a "failing grade". Hope that helps clear it up for you.
no i mean i don't know what a "batting average" is or why it's apples to oranges to compare it to test scores.
i'm assuming you mean that comparing a pure gaussian distribution to a weighted system is unproductive?