
Ask Dr. Math - Questions and Answers from our Archives

Mean and Variance of Distributions


Date: 12/07/2000 at 09:37:18
From: Vaughn Duff
Subject: Mean and Variance of Distributions

I'm stuck on the following problems. Any insight into deriving a 
solution would be greatly appreciated. Thank you!

I am trying to derive the expected value (mean) and variance for these 
distributions:

1) Exponential with parameter 1/k:

     f(w) = (k)*e^(-w*k) or maybe it should be f(w)=(1/k)*e^(-w/k)

The expected value is equal to:

     SUM[f(w)*w]     (discrete case)
or
     inf
     INT[f(w)*w dw]   (continuous case)
      0

The variance is equal to:

     SUM[f(w)*w^2] - (mean)^2          (discrete case)
or
     INT[f(w)*w^2 dw] - (mean)^2       (continuous case)

The results should be:

     E(w) = 1/k
     var(w) = 1/k^2

2) Hypergeometric distribution: define y = the number of S's in a 
random sample of size n (taken without replacement) from a population 
of N_s S's and N_f F's, where N = N_s + N_f:

     p(y) = ((N_s C y)*(N_f C n-y))/(N C n)

The definitions of the mean and variance are the same as in the first 
case. The results should be:

     E(y) = n*(N_s/N)
     var(y) = n*(N_s/N)*(1-N_s/N)*((N-n)/(N-1))


Date: 12/07/2000 at 11:03:04
From: Doctor Anthony
Subject: Re: Mean and Variance of Distributions

For the first case, we show that for a Poisson process the time 
interval between events has an exponential distribution.

If the average rate is k events per unit time, then in t units of time 
the expected number of events is kt; that is, the number of events in 
time t is Poisson with mean kt.

The probability of no event in time t is therefore

     P(0) = e^(-kt).(kt)^0/0! = e^(-kt)

The probability P(T>t) = prob. no events in time 0 -> t = e^(-kt), so

     P(T<t) = 1 - e^(-kt)

This is the c.d.f. of the time to first event.

To get the p.d.f. we differentiate this and have:

     f(t) = ke^(-kt)

This is the p.d.f. of the time interval between events, and it is the 
exponential distribution.
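
As a numerical sanity check, here is a short simulation sketch in 
Python with NumPy (the rate k = 2.0, the time step, and the number of 
slices are arbitrary choices, not part of the derivation). It 
approximates a Poisson process of rate k by flipping an event 
"coin" in each small time slice and compares the tail of the 
inter-event times with e^(-kt):

    import numpy as np

    rng = np.random.default_rng(0)
    k, dt, n_slices = 2.0, 1e-3, 2_000_000

    # an event occurs in a given slice with probability k*dt
    hits = rng.random(n_slices) < k * dt
    times = np.flatnonzero(hits) * dt     # event times
    gaps = np.diff(times)                 # inter-event times T

    for t in (0.5, 1.0, 2.0):
        print(t, np.mean(gaps > t), np.exp(-k * t))   # P(T > t) vs e^(-kt)

The two values printed for each t should agree to about two decimal 
places. Now for the mean: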

            inf
     E(t) = INT[t.ke^(-kt).dt]
             0

and integrating by parts (u = t, dv = ke^(-kt).dt):

          = [-t.e^(-kt)] from 0 to inf + INT[e^(-kt).dt]

          = 0 - (1/k)[e^(-kt)] from 0 to inf

          = -(1/k)[0 - 1]

     E(t) = 1/k

To get variance we first find:

     E(t^2) = INT[t^2.ke^(-kt).dt]     (from 0 to inf)

and integrating by parts (twice) gives 2/k^2 and so

     var(t) = 2/k^2 - 1/k^2

            =  1/k^2
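
Both integrals can also be checked symbolically; here is a sketch 
using SymPy (nothing in it beyond the p.d.f. already derived above):

    import sympy as sp

    t, k = sp.symbols('t k', positive=True)
    f = k * sp.exp(-k * t)                        # exponential p.d.f.

    E_t  = sp.integrate(t * f, (t, 0, sp.oo))     # mean -> 1/k
    E_t2 = sp.integrate(t**2 * f, (t, 0, sp.oo))  # second moment -> 2/k**2

    print(E_t, E_t2, sp.simplify(E_t2 - E_t**2))  # variance -> k**(-2)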


For the second case, suppose we have a population of b black and g 
green elements. A sample of size r is taken (without replacement) and 
we wish to find the expected number of black elements in the sample.

We could calculate this by direct methods but a better way is to 
introduce a new variable x(k) which takes the value 1 or 0 depending 
on whether the kth element in the sample is or is not black.

By symmetry, the probability that the kth sampled element is black is 
b/(b+g) for every k.

The expected value of x(k) is therefore:

     E[x(k)] = 1.b/(b+g) + 0.g/(b+g)
             = b/(b+g)

Writing S(r) = x(1) + x(2) + ... + x(r) for the number of black 
elements in the sample,

     E[S(r)] = E[x(1)] + E[x(2)] + ... + E[x(r)]

and so

     E[S(r)] = b/(b+g) + b/(b+g) + ... + b/(b+g)

to r terms, giving:

     E[S(r)] = rb/(b+g)
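
A small simulation sketch (Python/NumPy; the values b = 5, g = 7, 
r = 4 and the trial count are arbitrary choices) confirms this:

    import numpy as np

    rng = np.random.default_rng(0)
    b, g, r, trials = 5, 7, 4, 100_000
    population = np.array([1] * b + [0] * g)   # 1 = black, 0 = green

    # number of black elements in each sample of size r, no replacement
    counts = [rng.choice(population, size=r, replace=False).sum()
              for _ in range(trials)]

    print(np.mean(counts), r * b / (b + g))    # both close to 5/3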


Variance
--------
To calculate the variance we continue as follows:

     E(x(k)^2) = 1^2.b/(b+g) + 0.g/(b+g)
               = b/(b+g)

     Var(x(k)) = b/(b+g) - b^2/(b+g)^2

                 b(b+g) - b^2 
               = ------------
                  (b+g)^2

                    bg
               =  -------
                  (b+g)^2

Now we must combine the means and variances of the individual x(i)'s. 
Because the sample is drawn without replacement, the x(i)'s are not 
independent, so the variance of the sum picks up covariance terms:

     Var[S(r)] = SUM[Var(x(i))] + SUM[Cov(x(j),x(k))]   (over j <> k)

To evaluate this we first require the covariance of x(j) and x(k).

Covariance
----------- 
x(j).x(k) = 1 if BOTH x(j) and x(k) equal 1; otherwise x(j).x(k) = 0.

The probability that x(j).x(k) = 1 is:

      b     b-1
     --- . -----
     b+g   b+g-1

and so

       E[x(j).x(k)] = b(b-1)/[(b+g)(b+g-1)]

     COV[x(j).x(k)] = E[x(j).x(k)] - E(x(j)).E(x(k))

                    = b(b-1)/[(b+g)(b+g-1)] - [b/(b+g)].[b/(b+g)]

                      b(b-1)(b+g) - b^2(b+g-1)
                    = ------------------------
                          (b+g)^2.(b+g-1)

                      b^2(b+g) - b(b+g) - b^2(b+g) + b^2
                    = ----------------------------------
                               (b+g)^2.(b+g-1)

                       -b^2 - bg + b^2
                    =  ---------------
                       (b+g)^2.(b+g-1)

                             -bg
                    =  ---------------
                       (b+g)^2.(b+g-1)
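
This algebra is easy to mis-copy, so here is a symbolic check (a 
SymPy sketch; b and g are the only symbols involved):

    import sympy as sp

    b, g = sp.symbols('b g', positive=True)

    E_jk = b*(b - 1) / ((b + g)*(b + g - 1))   # E[x(j).x(k)]
    cov = E_jk - (b/(b + g))**2                # subtract E[x(j)].E[x(k)]

    target = -b*g / ((b + g)**2 * (b + g - 1))
    print(sp.simplify(cov - target))           # prints 0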

And so finally for a sample size r taken from the population of b+g we 
get:

                 rb
       E[S(r)] = ---
                 b+g

Each x(j) has a covariance term with each of the other r-1 variables, 
so there are r(r-1) covariance terms in all. Hence:

     Var[S(r)] = r.Var(x(k)) + r(r-1).Cov[x(j),x(k)]

                   rbg          r-1 
                 = -------  [1 - -----]
                   (b+g)^2       b+g-1

With b = N_s, g = N_f, b+g = N, and r = n, this is exactly the 
requested form n*(N_s/N)*(1-N_s/N)*((N-n)/(N-1)).
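
Both results can be cross-checked against SciPy's hypergeometric 
distribution (a sketch; b = 5, g = 7, r = 4 are arbitrary, and in 
scipy.stats notation M is the population size, n the number of 
"successes," and N the sample size):

    from scipy.stats import hypergeom

    b, g, r = 5, 7, 4
    M, n, N = b + g, b, r

    print(hypergeom.mean(M, n, N), r*b/(b + g))
    print(hypergeom.var(M, n, N),
          r*b*g/(b + g)**2 * (1 - (r - 1)/(b + g - 1)))

Each line should print the same value twice.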

- Doctor Anthony, The Math Forum
  http://mathforum.org/dr.math/   
    
Associated Topics:
College Statistics

