AP Statistics Curriculum 2007 Prob Basics
Current revision as of 13:14, 13 April 2018

General Advanced Placement (AP) Statistics Curriculum: Fundamentals of Probability Theory
A brief History
Currently, there is no evidence that the ancient Greeks or the Romans had any interest in, or pursued, the notion of predicting future likelihoods, probabilities, outcome chances, or odds. The first recorded attempts to develop a theory describing future events were driven by examining gambling games during the early Renaissance. In 1550, the Italian polymath Gerolamo Cardano (1501-1576) wrote a paper addressing the chance of certain outcomes in rolls of dice, which presented the first definition of probability. This manuscript was only completed in 1576 and then printed in 1663.
The next substantial contribution to probability theory was a letter exchange between Blaise Pascal (1623-1662) and Pierre de Fermat (1601-1665). Pascal and Fermat discussed a gambling problem proposed in 1654 by Chevalier de Mere, which examined the fundamentals of probability theory. The key question related to the number of experiments required to ensure obtaining a total sum of 6 when rolling two fair hexagonal dice. The letter correspondence between Pascal and Fermat led to the formulation of probability and expectation.
The most popular modern foundation of probability theory (among many alternative axiomatic formulations) was laid out by Andrey Kolmogorov, who defined the notion of sample space and proposed an axiomatic system for probability theory, and Richard von Mises, who tied in measure theory.
Fundamentals of Probability Theory
Probability theory plays a role in all studies of natural processes across scientific disciplines. The need for a theoretical probabilistic foundation is obvious since natural variation affects all measurements, observations and findings about different phenomena. Probability theory provides the basic techniques for statistical inference.
Random Sampling
A simple random sample of n items is a sample in which every member of the population has an equal chance of being selected and the members of the sample are chosen independently.
 Example: Consider a class of students as the population under study. If we select a sample of size 5, each possible sample of size 5 must have the same chance of being selected. When a sample is chosen randomly, it is the process of selection that is random. How could we select five members from this class randomly? Random sampling from finite (or countable) populations is well-defined. In contrast, random sampling of uncountable populations is only allowed under the Axiom of Choice.
 Random Number Generation using SOCR: You can use SOCR Modeler to construct random samples of any size from a large number of distribution families.
 Questions:
 How would you go about randomly selecting five students from a class of 100?
 How likely is the sample to represent the population? The sample won’t exactly resemble the population, as there will be some chance variation. This discrepancy is called chance error due to sampling.
 Definition: Sampling bias refers to a nonrandom selection process in which some members have a tendency to be selected more readily than others. When the sample is biased, the statistics turn out to be poor estimates.
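The five-students question above can be sketched in a few lines of Python (an illustrative sketch, not part of the SOCR toolset), using the standard library's random.sample, which gives every subset of the stated size an equal chance of selection:

```python
import random

# Hypothetical population: a class of 100 students, labeled 0..99
population = list(range(100))

# A simple random sample of size 5: every 5-member subset is equally likely,
# and members are chosen without replacement
sample = random.sample(population, 5)
print(sample)

# Repeating the draw illustrates chance error: each draw yields a different
# subset, so sample-based estimates vary from draw to draw
samples = [random.sample(population, 5) for _ in range(3)]
```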
Handson activities
 Monty Hall (Three-Door) Problem: Go to SOCR Games and select the Monty Hall Game. Click on the Information button to get the instructions on using the applet. Run the game 10 times with one of the two strategies:
 Stay-home strategy - choose one card first; as the computer reveals one of the donkey cards, you always stay with the card you originally chose.
 Swap strategy - choose one card first; as the computer reveals one of the donkey cards, you always swap your original guess and go with the third face-down card!
 You can try the Monty Hall Experiment as well. There you can run a very large number of trials automatically and observe the outcomes empirically. Notice that your chance to win doubles if you use the swap strategy. Why is that?
 See the SOCR Monty Hall Activity.
 See the conditional probability derivation of the exact chance of success.
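The swap-vs-stay comparison above can also be explored outside the applet; here is a minimal Python simulation (an illustrative sketch, not a SOCR tool) of the two strategies:

```python
import random

def monty_hall(swap, trials=100_000):
    """Estimate the probability of winning with the stay or swap strategy."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)       # door hiding the prize
        choice = random.randrange(3)      # contestant's initial pick
        # Host opens a door that is neither the pick nor the prize
        opened = next(d for d in range(3) if d != choice and d != prize)
        if swap:  # switch to the remaining face-down door
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(monty_hall(swap=False))  # close to 1/3
print(monty_hall(swap=True))   # close to 2/3
```

The stay strategy wins only when the initial pick was right (probability 1/3); swapping wins exactly when the initial pick was wrong (probability 2/3), which is why swapping doubles the chance of success.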
Law of Large Numbers
When studying the behavior of coin tosses, the law of large numbers implies that the relative proportion (relative frequency) of heads to tails in a coin-toss experiment becomes more and more stable as the number of tosses increases. This applies to the relative frequencies, not the absolute counts of heads and tails.
 There are two widely held misconceptions about the law of large numbers relating to coin tosses:
 Differences between the actual numbers of heads and tails become more variable as the number of tosses increases; a sequence of 10 heads does not increase the chance of getting a tail on the next trial.
 Coin toss results are independent and fair, and the outcome behavior is unpredictable.
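The stabilization of relative frequencies can be observed directly; this minimal Python sketch (an illustration, not a SOCR tool) tosses a fair coin n times and reports the proportion of heads:

```python
import random

def head_proportion(n):
    """Relative frequency of heads in n simulated fair-coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

# The relative frequency settles near 0.5 as n grows, even though the
# absolute difference between head and tail counts tends to grow
for n in (10, 100, 10_000):
    print(n, head_proportion(n))
```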
Types of probabilities
Probability models have two essential components: sample space and probabilities.
 Sample space (S) for a random experiment is the set of all possible outcomes of the experiment.
 An event is a collection of outcomes.
 An event occurs if any outcome making up that event occurs.
 Probabilities for each event in the sample space.
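For a concrete instance of these two components, consider a fair six-sided die; the following Python sketch (illustrative only) writes down the sample space and computes the probability of the event "roll an even number":

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die
S = {1, 2, 3, 4, 5, 6}
# Probability model: each outcome is equally likely
P = {outcome: Fraction(1, 6) for outcome in S}

# An event is a collection of outcomes; it occurs if any of them occurs
even = {2, 4, 6}
p_even = sum(P[o] for o in even)
print(p_even)  # 1/2
```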
Where do the outcomes and the probabilities come from?
 Probabilities may come from models – say, a mathematical/physical description of the sample space and the chance of each event; for example, constructing a fair die-tossing game.
 Probabilities may be derived from data – data observations determine our probability distribution. Say we toss a coin 100 times and use the observed Head/Tail proportions as probabilities.
 Subjective Probabilities – combining data and psychological factors to design a reasonable probability table (e.g., gambling, stock market).
Event Manipulations
Just like we develop rules for numeric arithmetic, we would like to use certain event-manipulation rules (event arithmetic).
 Complement: The complement of an event A, denoted A^{c} or A', occurs if and only if A does not occur.
 Union: A ∪ B, read "A or B", contains all outcomes in A or B (or both).
 Intersection: A ∩ B, read "A and B", contains all outcomes which are in both A and B.
 Draw Venn diagram pictures of these composite events.
 Mutually exclusive events cannot occur at the same time (A ∩ B = ∅).
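Python's built-in sets mirror this event arithmetic directly; a small sketch on a die-roll sample space (the events named here are illustrative):

```python
# Sample space of one die roll and two events
S = {1, 2, 3, 4, 5, 6}
A = {1, 2, 3}   # "roll at most 3"
B = {2, 4, 6}   # "roll an even number"

union = A | B          # "A or B": outcomes in A or B (or both)
intersection = A & B   # "A and B": outcomes in both A and B
complement = S - A     # A^c: outcomes for which A does not occur
print(union, intersection, complement)

# Mutually exclusive events have an empty intersection
odd = {1, 3, 5}
print(odd & B == set())  # True
```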
Axioms of probability
 First axiom: The probability of an event is a nonnegative real number: P(E) ≥ 0, for every event E ⊆ S, where S is the sample space.
 Second axiom: This is the assumption of unit measure: the probability that some elementary event in the entire sample space will occur is 1. More specifically, there are no elementary events outside the sample space: P(S) = 1. This is often overlooked in mistaken probability calculations: if you cannot precisely define the whole sample space, then the probability of any subset cannot be defined either.
 Third axiom: This is the assumption of additivity: any countable sequence of pairwise disjoint events E_{1}, E_{2}, ... satisfies P(E_{1} ∪ E_{2} ∪ ...) = P(E_{1}) + P(E_{2}) + ...
 Note: For a finite sample space, a sequence of numbers {p_{1}, p_{2}, ..., p_{n}} is a probability distribution for a sample space S = {s_{1}, s_{2}, ..., s_{n}}, if the probability of the outcome s_{k} is p(s_{k}) = p_{k}, for each 1 ≤ k ≤ n, all p_{k} ≥ 0, and p_{1} + p_{2} + ... + p_{n} = 1.
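The finite-sample-space conditions translate into a short checklist; this hypothetical helper (not a SOCR function) verifies nonnegativity and unit total for a candidate distribution:

```python
from fractions import Fraction

def is_probability_distribution(p):
    """Check the finite-sample-space conditions: each p_k >= 0 and the p_k sum to 1."""
    return all(pk >= 0 for pk in p) and sum(p) == 1

print(is_probability_distribution([Fraction(1, 6)] * 6))  # fair die: True
print(is_probability_distribution([Fraction(1, 2)] * 3))  # total is 3/2: False
```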
Birthday Paradox
The Birthday Paradox Experiment provides an interesting illustration of some of the fundamental probability concepts.
In a random group of N people, what is the probability, P, that at least two people have the same birthday?
 For example, if N=23, P>0.5. The main confusion arises from the fact that in real life we rarely meet people having the same birthday as us, even though we meet more than 23 people.
The reason for such high probability is that any of the 23 people can compare their birthday with any other one, not just you comparing your birthday to anybody else’s.
There are 20-choose-2 = (20×19)/2 = 190 ways to select a pair of people from a group of 20. Assume there are 365 days in a year; then P(one particular pair shares a birthday) = 1/365, and P(one particular pair fails) = 1 - 1/365 ≈ 0.99726.
For N=20, let E = {no 2 people have the same birthday} be the event that all 190 pairs fail (have different birthdays); then, approximately, P(E) = 0.99726^190 ≈ 0.59. Hence, P(at least one shared birthday) = 1 - 0.59 = 0.41, which is quite high. Note: for N=42, P>0.9.
This is an approximate solution to the Birthday problem. You can also see the exact solution in the Birthday Paradox Activity.
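Both the pairwise approximation above and the exact answer are easy to compute; here is a minimal Python sketch (illustrative, not part of the Birthday Paradox applet):

```python
def p_shared_birthday(n, days=365):
    """Exact probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):  # person k must avoid the k birthdays already taken
        p_all_distinct *= (days - k) / days
    return 1 - p_all_distinct

# Pairwise approximation for N = 20: all 190 pairs must fail
approx = 1 - (1 - 1 / 365) ** 190

print(round(p_shared_birthday(23), 3))  # exceeds 0.5
print(round(p_shared_birthday(42), 3))  # exceeds 0.9
print(round(approx, 3))                 # near 0.41
```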
Examples
Elementary Probability
Each of the three boxes below contains two types of balls (Red and Green). Box 1 has 4 Red and 3 Green balls, box 2 has 3 Red and 2 Green balls, and box 3 has 2 Red and 1 Green balls. All balls are identical except for their labels. Which of the three boxes are you most likely to draw a Red ball from? In other words, if a randomly drawn ball is known to be Red, which box did we most likely draw the ball out of?
 Exact solution: The probabilities of drawing a Red ball out of each of the 3 boxes are 4/7, 3/5 and 2/3, respectively. The least common denominator of the prime numbers 7, 5 and 3 is their product, 105. Thus, these probabilities may also be expressed as 60/105, 63/105 and 70/105, respectively. Clearly, the highest chance of drawing a Red ball is associated with box 3, despite the fact that this box has the smallest number of Red balls.
 Empirical solution: This problem may also be explored experimentally using the SOCR Ball-and-Urn Experiment (see this activity). Each of the 3 figures below illustrates this experiment, where we sample (with replacement) 100 balls from each box, respectively.
 Box 1: To empirically test the chance of drawing a Red ball from box 1, we set N=7 (total number of balls), R=4 (number of red balls in the population), and n=100 (number of balls we sample, i.e., number of experiments we do). The result of these 100 random draws will vary each time. However, one such experiment generated 62 Red balls out of the 100 draws (see image below and the value of the Y=62 variable in the summary table).
 Box 2: Now we set N=5 (total number of balls), R=3 (number of red balls in the population), and n=100 (number of balls we sample, i.e., number of experiments we do). Again, the result of these 100 random draws will vary each time. However, one such experiment generated 64 Red balls out of the 100 draws (see image below and the value of the variable Y=64 in the summary table below the graph).
 Box 3: Finally, we set N=3 (total number of balls), R=2 (number of red balls in the population), and n=100 (number of balls we sample, i.e., number of experiments we do). Again, the result of these 100 random draws will vary each time. However, one such experiment generated 71 Red balls out of the 100 draws (see image below and the value of the variable Y=71 in the summary table below the graph).
From these empirical tests, we can propose that a Red ball is most likely to be drawn from Box 3. Again, remember that these simulations will produce different results each time we do the experiments. However, the expected means (expected number of Red balls in sample of 100), which are reported in the bottomright tables (for each box setting), indicate this conclusion more reliably. These expected Red ball counts for the 3 boxes are 57.14, 60.0 and 66.67, respectively.
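The exact solution can also be double-checked with exact fraction arithmetic; a short Python sketch (illustrative only):

```python
from fractions import Fraction

# P(Red) for each box, from the ball counts in the example
boxes = {1: Fraction(4, 7), 2: Fraction(3, 5), 3: Fraction(2, 3)}

best = max(boxes, key=boxes.get)
print(best, boxes[best])  # box 3 has the highest chance: 2/3 = 70/105
```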
SOCR Empirical Probabilities
For each of these handson interactive experiments discuss the sample space, probabilities, events of interest, event operations, and how to compute theoretically and empirically the probabilities of these events.
 Buffon Coin Experiment
 Buffon Needle Experiment
 Poker Dice Experiment
 Poker Experiment
 Spinner Experiment
 Other experiments from this collection.
Problems
References
 SOCR Home page: http://www.socr.ucla.edu