Date: Oct 3, 2012 8:59 PM
Author: Joe
Subject: Big-O Notation Estimates

All,

I'm having some trouble understanding Big-O estimates. I'm using Discrete Mathematics and Its Applications by Rosen, 6th edition. In Section 3.2, one of the example problems says to show that f(x) = x^2 + 2x + 1 is O(x^2).

The solution states the following:

++++++++++++++
"We observe that we can readily estimate the size of f(x) when x > 1 because x < x^2 and 1 < x^2 when x > 1. It follows that

0 <= x^2 + 2x + 1 <= x^2 + 2x^2 + x^2 = 4x^2

whenever x > 1.
...
Consequently, we can take C = 4, k = 1 as witnesses to show that f(x) is O(x^2). That is, f(x) = x^2 + 2x + 1 <= 4x^2 whenever x > 1."
++++++++++++++
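
Just to convince myself the numbers at least work out, I put together a quick Python check of that final inequality (this is only my own throwaway script, not anything from the book; the names f and bound are mine):

    # Check numerically that x^2 + 2x + 1 <= 4*x^2 for several x > 1
    def f(x):
        return x**2 + 2*x + 1

    def bound(x):
        return 4 * x**2   # C * g(x) with C = 4 and g(x) = x^2

    for x in [1.01, 2, 5, 10, 100, 1000]:
        assert f(x) <= bound(x), f"bound fails at x = {x}"
        print(x, f(x), bound(x))

It never fails for the values I tried, so I believe the result. It's the algebra step itself that I can't follow.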

I have read this so many times, and for some reason the concept is not sticking. I'm hoping someone can explain how this problem was worked. Here is what I don't understand:

Why is "0<=" being used here?
How did we go from x^2 + 2x + 1 to x^2 + 2x^2 + x^2?

I thought maybe we were using the value of g(x) (that is, x^2) to replace the x values in f(x), but shouldn't that give x^4 + 2x^2 + 1?

Any help would be appreciated.