06-17-2012, 08:02 PM   #6
pvaudio (Legend | Join Date: Jul 2009 | Posts: 7,544)

Quote:
Originally Posted by Mdubb23
Purely out of my own curiosity, I was running through some math in my head this morning.

If someone has a 40% success rate at some activity, then attempts two more trials, the first of which is a failure and the second of which is a success, what is flawed in the following logic:

When the failure occurred, there were fewer total trials than there were when the success occurred, meaning the failure constituted a larger percentage of the total trials than did the success, and so the total average should decrease more than it should come back up?

I realize this is inherently false; the total average will always increase. If you are earning a 40% in a class and then score a 50% on a two-question quiz, your grade will always increase, no matter which order the questions were asked.

The most apt explanation I've been able to come up with is not to look at the 1/2 as a 50%, but instead to look at 0/1 and 1/1 separately, as 0% followed by 100%. Because the 0% is closer to the average (40%) than the 100% is, the 100% will have a greater effect on the average. In a set of data, an outlier will always affect the data more than a trial closer to the average, no matter which order the two trials were conducted.

But what makes that logic more valid than my earlier 'logic'?
My friend, you are simply overthinking fractions. sureshs has it right: you are mixing probability with actuality. The probability of you getting something right may be 40%. That's your initial statement. You're now wondering what's going on because, GIVEN two new results, your observed success rate first dips and then comes back up. That is completely independent of your 40% probability. If you want to use the two new trials, then, as sureshs said, you need to recalculate your rate from all of the data. Otherwise, the two trials are meaningless with regard to your probability of success.
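To make that recalculation concrete, here is a quick sketch in Python. The 4-out-of-10 starting record is just an assumption for illustration; the 40% in the question isn't tied to any particular counts.

Code:
# Hypothetical starting record: 4 successes in 10 trials = 40%.
# (These counts are assumed; the 40% in the question has no stated totals.)
def rate(successes, trials):
    return successes / trials

print(rate(4, 10))   # 0.400  starting rate

# Order 1: failure first, then success
print(rate(4, 11))   # 0.364  dips below 40% after the failure
print(rate(5, 12))   # 0.417  ends above 40% after the success

# Order 2: success first, then failure
print(rate(5, 11))   # 0.455  jumps after the success
print(rate(5, 12))   # 0.417  same endpoint after the failure

Both orders land on the same 5/12 ≈ 41.7%, which sits above 40% simply because the two new trials together came in at 50%.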

Let's use a fairly classic example of a coin toss (a Bernoulli trial where heads is the success with probability 50%). That 50% never changes no matter what. No matter how many times you toss that coin, it will always be just as likely to be heads as it is tails. Using this example, your paradox can be explained via the following two probabilities:

1. What is the probability of getting heads on the 9th toss given that you've gotten 8 tails previously? 1/2. The tosses are independent, so the 9th toss has nothing to do with the outcomes of the prior 8.

2. What is the probability of getting 8 tails and then getting heads on the 9th toss? That is (1/2)^8 * (1/2) = (1/2)^9 ≈ 0.002, or about 0.2%. In this case, the tails look like they've dragged down your success rate, when in reality they have no effect on the 9th toss whatsoever.

The first one is what's really going on; the second is what you're confusing it with, and it's a completely different quantity.
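If a sanity check helps, here is a small simulation sketch (plain Python, standard library only) that estimates both quantities; the run count and seed are arbitrary choices of mine.

Code:
import random

random.seed(0)        # arbitrary seed, just for reproducibility
N = 1_000_000         # number of simulated 9-toss sequences

runs_with_8_tails = 0     # sequences whose first 8 tosses are all tails
heads_after_8_tails = 0   # of those, how many get heads on toss 9

for _ in range(N):
    tosses = [random.random() < 0.5 for _ in range(9)]  # True = heads
    if not any(tosses[:8]):          # first 8 tosses were all tails
        runs_with_8_tails += 1
        if tosses[8]:                # heads on the 9th
            heads_after_8_tails += 1

# 1. P(heads on 9th | first 8 were tails) -- hovers around 0.5
print(heads_after_8_tails / runs_with_8_tails)

# 2. P(8 tails then heads) = (1/2)^9 ~= 0.002 -- about 0.2% of all runs
print(heads_after_8_tails / N)

The first number is the conditional probability and stays near 1/2 no matter how many tails came before; the second is the probability of that one particular nine-toss sequence, which is why it's so small.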