The Math Forum

Ask Dr. Math - Questions and Answers from our Archives

Set Terminology

Date: 07/07/97 at 12:18:05
From: Bob Howell
Subject: Some more math terminology help, please

So far your help has been very valuable in understanding the math 
symbols for the textbook I am studying. I am stuck again, so your help 
is needed.

Ratio RA(I) is defined as:  RA(I) = A(I)/OPT(I)

That is, the ratio of the solution found by an approximation algorithm 
A(I) to the optimal solution OPT(I) for instance I of a minimization 
problem. For a maximization problem:  RA(I) = OPT(I)/A(I)

So, clearly RA(I) >= 1, and the closer to 1, the better the 
approximation algorithm. This I understand. Now, here is what I need 
explained. What does the terminology inf{...} mean in the following 
definition of the absolute performance ratio (which I assume is a way 
of combining the above minimization and maximization problem ratios 
into a single definition)? 

The absolute performance ratio RA for an approximation algorithm A is 
given by:

RA = inf{r >= 1 : RA(I) <= r for all instances I of the given problem}.

I understand what max {...} and min{...} mean, but I have never 
encountered inf{...} before.  

Thank you,
Bob Howell

Date: 07/07/97 at 18:48:10
From: Doctor Daniel
Subject: Re: Some more math terminology help, please

Hi Bob,

So you're working with approximation algorithms, huh?  That's actually 
what I do as well, so I'll try to be a little specific to your problem.

Actually, the definition you've given is only one of a few that are 
common for the description of the performance of an approximation 
algorithm. I'll explain the definition you give and then follow up 
with the slightly more common definition. You can find a copy of the 
definition I use in Chapter 37 of _Introduction to Algorithms_ by 
Cormen, Leiserson, and Rivest (which, if you're working on algorithm 
design or implementation, should probably be on your shelf.)  

Anyhow, inf means "infimum," or "greatest lower bound."  This is 
slightly different from minimum in that the greatest lower bound is 
defined as:

x is the infimum of the set S [in symbols, x = inf (S)] iff:

a) x is less than or equal to all elements of S
b) there is no other number larger than x which is less than or equal
   to all elements of S.

Basically, (a) means that x is a lower bound of S, and (b) means that 
x is greater than all other lower bounds of S.

This differs from min (S) in that min (S) has to be a member of S.  
Suppose that S = {2, 1.1, 1.01, 1.001, 1.0001, 1.00001, ...}.  This 
set has no smallest member, no minimum.  However, it's trivial to show 
that 1 is its infimum; clearly all elements are greater than or equal 
to 1, and if we thought that something greater than 1 was a lower 
bound, it'd be easy to show some member of S which is less than it.  
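A small Python sketch may help see the distinction. It generates the elements of that set S; the running minimum keeps shrinking toward 1 but never reaches it, so S has no minimum, while 1 is its infimum:

```python
# S = {2, 1.1, 1.01, 1.001, ...}: the k-th element is 1 + 10**-k.
def element(k):
    return 1 + 10 ** -k

# The minimum over any finite prefix keeps decreasing...
partial_min = min(element(k) for k in range(10))
assert 1 < partial_min < 1.001
# ...but no element ever equals 1. Every element exceeds 1, and the
# elements get arbitrarily close to 1, so inf(S) = 1 even though
# min(S) does not exist.
```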

So that's the difference between inf and min. It's worth noting that 
every set has an inf (assuming minus infinity is okay), and that the 
two concepts are the same for finite sets.

A couple of last terminology bits: glb is another way of writing inf 
(short for "greatest lower bound"), and then there are two other 
concepts that are similar for upper bounds, sup and lub, which are 
short for "supremum" and "least upper bound."

So what are we talking about here with these approximation algorithms? 
Basically, the absolute performance ratio is not a way to combine 
min/max problems. It's actually more important. The idea is one of the 
central ones in algorithm design: an algorithm is as weak as its 
_WORST_ example. Suppose that we have an approximation algorithm A, 
and that, over the set of all possible instances for the problem, it 
returns values of RA of:

  1.2, 1.5, 1.4, 1.2, 1.5, 1.2, 1.01, 600000, 1.3, ...

Given just this small set of data, it's clear that we can't claim that 
we should expect "good" performance from the algorithm. There's that 
bad case where the algorithm returned a result that was 600,000 times 
off the correct one; either in a minimization problem the answer was 
six hundred thousand times too big, or in a maximization problem the 
answer was way too small. So the performance ratio is at least 600,000.

Actually, that definition you gave me is a pretty bad one. Let's 
consider what it means. It means we want the greatest lower bound of 
the set of all numbers r greater than or equal to 1 such that RA(I) is 
less than or equal to r for all instances I of the problem.

What does it mean that r is one of these numbers?  If you look at it, 
r has to be an upper bound for the set of values of RA(I) for all 
instances I of the problem. So we want the greatest lower bound of a 
set of upper bounds on the performance ratio. We know that RA (I) >= 1 
always, so if RA(I) <= r, then r >= 1 always, so the qualification 
that r has to be greater than or equal to one is irrelevant. So 
really, we're looking for the GLB of a bunch of upper bounds of a set. 
Turns out, that's the lub. So: 

  RA = sup{RA(I) : I an instance of the problem}.

I'm afraid I may have put way too much into the last paragraphs. 
Perhaps if you look at an example, it may be clearer:

Suppose that there is a finite set of possible instances, with these
performance ratios:

  1.3, 1.5, 1.6, 45, 1.2, 5.6

Then the set of all r's that satisfy the definition you gave me would 
be any number greater than or equal to 45, since RA(I) <= r for all 
instances I, and RA(I) = 45 for some instance. So the inf of this set 
would be 45, which is also the sup of the set of performance ratios I 
listed.
Here's an infinite example:

Suppose that the performance ratios look like this:

1.3, 1.76, 1.9, 1.6, 1.99, 1.8, 1.999, 1.45, 1.99999999999, 1.2,
1.99999999999999999999999, ... (with them arbitrarily close to 2)

Then clearly the infimum of the set of upper bounds to this set is 2, 
so this approximation algorithm would have a performance guarantee of 
2.  But that's also the supremum of the set of performance ratios.

I hope that begins to make things a little clearer, at least for the 
definition of performance ratio that you gave. 

However, that's not a fair definition of the performance ratio for 
some approximation algorithms. Suppose that I can give you an 
approximation algorithm with this guarantee:

"On an instance of size n (namely, that can be encoded in n bits), I
guarantee that the answer returned will be no more than log log n 
times the minimum."

That's a pretty good guarantee; log log n is usually quite small; 
certainly less than 10. However, according to this definition, since 
log log n does grow to infinity as n grows to infinity, this algorithm 
will have an RA value of infinity. How distressing!  
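To see just how slowly log log n grows, here is a quick computation (using base-2 logs) for some very large input sizes:

```python
import math

# log2(log2(n)) stays tiny even for astronomically large n.
for n in (2**10, 2**32, 2**64, 2**1024):
    print(n.bit_length() - 1, "->", math.log2(math.log2(n)))
# Even n = 2**1024 gives log log n = 10, yet the sup over all n is
# still infinite -- which is why the absolute performance ratio
# reports infinity for this perfectly good algorithm.
```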

The solution to this is to allow the performance guarantee to be a 
function of the size of the input. (Incidentally, if this is getting 
too heavy, feel free to not read the rest of this message. On the 
other hand, _I'm_ currently designing such an approximation algorithm, 
so why shouldn't you?  Anyhow, be sure to read the summary at the end 
of this message.)

The problem with the definition of RA above is that it doesn't depend 
on the length of the input. So here's an alternative definition of a 
possible time bound:

We say that an approximation algorithm for the problem has a ratio 
bound of rho (n) ("is a rho-approximation algorithm" is more standard) 
if, for all instances I of size n, RA(I) <= rho(n). (Paraphrased from 
p. 964 of _Introduction to Algorithms_.)

Note that this notation is much less precise; since log (n) is less 
than n, it means that a log(n)-approximation algorithm is also an 
n-approximation algorithm. But it does take into account my worry from 
above; my algorithm is now identifiable as a log log (n)-approximation 
algorithm, and people will feel much safer using it than they would 
have if it were advertised as having an absolute performance ratio of 
infinity.
So, now that I've thrown pages and pages at you, let's summarize:

INF means greatest lower bound.  It's the largest number which is less 
than or equal to all elements of a set S.  It differs from the minimum 
in that it does not have to be an element of S, but is the same for 
finite sets. It can be written as glb.

SUP is the opposite of inf; it means least upper bound, and can also 
be written lub.

The definition of absolute performance ratio you gave talks about the 
worst possible performance of the algorithm. We need to be sure that 
we're never claiming that the algorithm does better than it might on 
some anomalous instance of the problem.  

Another definition of the performance ratio of an algorithm, which is 
far more standard (but a little newer), compares the algorithm's 
performance ratio on all instances of size n to a function of n; the 
algorithm is a rho-approximation if, for all instances of size n, 
RA(I) <= rho(n). This notation is a little less precise, but is much 
more useful for some problems when the best known approximation 
algorithm would have an absolute performance ratio of infinity, by the 
other definition.

I'd welcome further questions about these definitions, or about what 
you're reading, if you have them.  Be advised that I don't think there 
are that many math doctors who do theoretical computer science, so it 
may take a while before you get a reply. (I hesitate to point out 
that you're also a little beyond what most of our K-12 questions are 
like, as well.)

Good luck!

-Doctor Daniel,  The Math Forum
 Check out our web site!   
Associated Topics:
College Algorithms
College Definitions


Ask Dr. Math™
© 1994- The Math Forum at NCTM. All rights reserved.