Randy wrote:
> So if I may summarize (boy, do I know how to court danger), the value of
> CC seems to lie in:
>
> 1) identifying a problem's theoretical omega (lower bound) in the
> solution space (like sorting's NlgN), and
>
> 2) characterizing classes of problems as being computationally
> equivalent (polynomially equivalent?), which is useful because:
>
> 3) when problems A and B are in the same CC class, both are
> inter-reducible polynomially, therefore problem A can be solved using
> problem B's algorithm, at no worse than a polynomially greater runtime
> cost.
These are all aspects of complexity theory, but the field is larger still. For example, you are considering only worst-case time complexity and polynomial-time reductions. In addition, you have:
1. Different complexity measures, like average-case complexity. The average case and the worst case can differ considerably. The biggest gap I am aware of is the word problem for groups: for some groups the problem is undecidable, yet there is an algorithm with quadratic average-case complexity.
2. Different resources, like space complexity. In randomized algorithms, the number of random bits required is another resource that people in complexity theory try to minimize.
3. Different notions of reducibility, like Turing reducibility or log-space reducibility.
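To make point 3 of the summary concrete, here is a minimal sketch of a polynomial-time many-one reduction: INDEPENDENT-SET reduces to CLIQUE, because a vertex set is independent in a graph G exactly when it is a clique in G's complement. The brute-force `has_clique` below is just a hypothetical stand-in for "problem B's algorithm"; the point is that the reduction itself (complementing the graph) costs only polynomial extra work.

```python
from itertools import combinations

def complement(n, edges):
    """Edge set of the complement of an n-vertex graph (vertices 0..n-1)."""
    edges = {frozenset(e) for e in edges}
    return {frozenset(p) for p in combinations(range(n), 2)} - edges

def has_clique(n, edges, k):
    """Brute-force CLIQUE oracle: is there a set of k mutually adjacent vertices?"""
    edges = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in edges for p in combinations(s, 2))
               for s in combinations(range(n), k))

def has_independent_set(n, edges, k):
    """Solve INDEPENDENT-SET via the reduction: complement once, call the oracle once."""
    return has_clique(n, complement(n, edges), k)

# 4-cycle 0-1-2-3-0: {0, 2} is independent, but no 3 vertices are.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_independent_set(4, cycle, 2))  # True
print(has_independent_set(4, cycle, 3))  # False
```

Swapping in a faster CLIQUE algorithm would speed up INDEPENDENT-SET by the same reduction, which is exactly the inter-reducibility Randy described.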
There are also many different complexity classes and notions of computability. There is research into "descriptive complexity theory", where people explore connections between logic and complexity. Sometimes people even study particular classes of algorithms for a particular problem, like the simplex algorithm for linear programming or the DPLL algorithm for Boolean satisfiability.
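For a flavor of the last point, here is a toy sketch of the DPLL idea: unit propagation plus branching on a literal. It uses the DIMACS-style convention (clauses are lists of nonzero integers, with -v meaning NOT v); this is an illustration of the algorithm's structure, not a production SAT solver.

```python
def dpll(clauses):
    """Minimal DPLL: unit propagation, then branch on a literal.
    Clauses are lists of nonzero ints; -v means NOT v.
    Returns True iff the formula is satisfiable."""
    clauses = [list(c) for c in clauses]
    # Unit propagation: a single-literal clause forces that literal true.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        new = []
        for c in clauses:
            if lit in c:
                continue                      # clause satisfied, drop it
            if -lit in c:
                c = [x for x in c if x != -lit]
                if not c:
                    return False              # empty clause: conflict
            new.append(c)
        clauses = new
    if not clauses:
        return True                           # every clause satisfied
    # Branch: try both truth values of some literal from a remaining clause.
    lit = clauses[0][0]
    return dpll(clauses + [[lit]]) or dpll(clauses + [[-lit]])

print(dpll([[1, 2], [-1, 2]]))         # True  (e.g. set variable 2 true)
print(dpll([[1, 2], [-1, 2], [-2]]))   # False (forcing -2 yields 1 and -1)
```

Real solvers add clause learning, smarter branching heuristics, and careful data structures, but this recursive skeleton is the classical DPLL procedure.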
Some of this stuff may be outside what most people consider mainstream complexity theory, but it all depends on basic notions like NP-completeness and reducibility.