Math 110.107, Calculus II (Biological and Social Sciences)
Fall 2010 Course Lecture Synopses
Week 13: November 22
http://www.mathematics.jhu.edu/brown/courses/f10/107.htm
Lectures: MWF 10:00am - 10:50am, Krieger 205
Office: 403 Krieger Hall
Phone: 410-516-8179
Office Hours: M 1:00-2:00 pm, W 1:00-2:00 pm, and by appointment at other times
Below is some basic information pertaining to the lectures of this course. I will update this page after each lecture or two, both to help the students organize the material for the course and to let the TAs know what material I covered and how it was covered. Please direct any comments about this page to me at the above contact information.
· Monday, November 22: Today, I finished Section 12.4 and discrete random variables. I defined the variance of a distribution and related it to the mean. In detail, I noted that the mean or expected value $E(X)$ of a random variable $X$ is a measure of the central tendency of the distribution. But two or more random variables can have the same mean and yet be wildly different as distributions. Hence more information is needed to distinguish between two random variables. The variance of $X$, defined as $\mathrm{Var}(X) = \sum_i (x_i - \mu)^2 \, P(X = x_i)$, where $\mu = E(X)$ is the mean, is the average of the squared differences of each value of $X$ from the mean $\mu$ (again weighted by their probabilities). It is a measure of dispersion and gives information on how spread out the distribution values are. Really, there is little more I can say about this, since we need to get to continuous distributions now. The rest of the section is good stuff, but we will not be devoting time to it.
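As a quick illustration (a numerical example added here for concreteness): suppose $X$ takes the values 0, 1, and 2 with probabilities $\tfrac14$, $\tfrac12$, and $\tfrac14$. Then
$$\mu = E(X) = 0 \cdot \tfrac14 + 1 \cdot \tfrac12 + 2 \cdot \tfrac14 = 1,$$
$$\mathrm{Var}(X) = (0-1)^2 \cdot \tfrac14 + (1-1)^2 \cdot \tfrac12 + (2-1)^2 \cdot \tfrac14 = \tfrac12.$$
A second variable $Y$ taking the values 0 and 2 with probability $\tfrac12$ each has the same mean $\mu = 1$, but $\mathrm{Var}(Y) = (0-1)^2 \cdot \tfrac12 + (2-1)^2 \cdot \tfrac12 = 1$: the variance separates two distributions that the mean alone cannot.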
I then defined a continuous random variable, and spent much time focusing on how one goes from a discrete random variable to the continuous case. By relying on the cumulative distribution function, one can see how to generalize. Take a sample population and measure their heights only in feet. One can use the total number in each foot category over the total number in the population to get a probability distribution. If one were then to repeat the measurements, but this time allowing for each inch category between feet, the probabilities would be much smaller, since there are more boxes. And as the accuracy of the measurement became perfect, the number of people in each vanishingly small bracket would go to 0. However, if we increased the sample size each time we increased the measurement accuracy, we could maintain a probability mass function that is reasonably the same as that of the original set of measurements, but with many more bars in the bar chart. Passing to infinite accuracy (and infinite population size), we would get a continuous distribution with a continuous version of the probability mass function. A continuous probability mass function is called a probability density function.
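To put rough numbers to this refinement (a hypothetical illustration of the idea): if 50 of 100 people fall in the 5-foot category, that bracket carries probability $\tfrac{50}{100} = 0.5$. Measuring in inches splits it into 12 narrower brackets, each with probability near $\tfrac{0.5}{12} \approx 0.04$. The individual probabilities shrink toward 0, but the probability per unit width, roughly $0.5$ per foot in either case, stabilizes; it is this ratio that survives the limit and becomes the density.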
To see this differently, take any continuous cumulative distribution function $F(x)$. It can be written as the anti-derivative of another function $f(x)$, as long as $f$ satisfies certain characteristics: it must have $\int_{-\infty}^{\infty} f(x)\,dx = 1$ (really, also $\lim_{x \to -\infty} F(x) = 0$ and $\lim_{x \to \infty} F(x) = 1$), and it must be always non-negative. Then this kind of $f$ is always a probability density function.
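As a concrete instance of such a pair (a standard example, included here for illustration): take $f(x) = e^{-x}$ for $x \ge 0$ and $f(x) = 0$ for $x < 0$. This $f$ is non-negative and satisfies
$$\int_{-\infty}^{\infty} f(x)\,dx = \int_0^{\infty} e^{-x}\,dx = 1,$$
and its anti-derivative $F(x) = 1 - e^{-x}$ (for $x \ge 0$) is a continuous cumulative distribution function rising from 0 to 1.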
Notice that in this continuous set-up, we still get $F(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\,dt$ as the cumulative area under the curve $f$ from $-\infty$ to $x$, just as in the discrete case. But here we also get $P(X = x) = 0$ for every individual value $x$. What this means is that with infinite precision, it is impossible to get a perfect measurement. Hence it makes more sense to talk about a measurement in an interval, like $P(a \le X \le b)$ or $P(X \le b)$. Indeed, with integrals, $P(a \le X \le b) = \int_a^b f(x)\,dx$ and $P(X = a) = \int_a^a f(x)\,dx = 0$. Hence $P(a \le X \le b) = P(a < X < b)$. We will continue next time.
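To see the interval computation with a specific density (an example added for concreteness): take $f(x) = 2x$ on $[0, 1]$ and $f(x) = 0$ elsewhere, which is non-negative and integrates to 1. Then
$$P\!\left(\tfrac14 \le X \le \tfrac12\right) = \int_{1/4}^{1/2} 2x\,dx = x^2 \Big|_{1/4}^{1/2} = \tfrac14 - \tfrac{1}{16} = \tfrac{3}{16},$$
while $P(X = \tfrac12) = \int_{1/2}^{1/2} 2x\,dx = 0$, so including or excluding the endpoints does not change the answer.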