firstname.lastname@example.org wrote:
> In comp.theory Randy <email@example.com> wrote:
...
>> I think any algorithm that contains uncertainty (indefinite input) can
>> still be assessed for time or space complexity. Whenever the problem
>> cannot be solved, its complexity must remain undefined. However, it
>> seems to me that when the problem can be solved, its complexity can be
>> characterized and maybe quantified.
>
> Noone was talking about indefinite input. The Halting Problem
> is unsolvable and there is nothing indefinite or uncertain about
> the input.
The halting problem is entirely about what is known about 1) the program's input or 2) the program's search space. If there is no input, or the input is completely known, or the search space is completely known and finite, then the program runs deterministically and its halting behavior can, in principle, be decided.
Halting is only undecidable when the input is incompletely known (e.g. infinite), or has an incompletely known effect on the program, as with external instruction streams that might include or induce infinite loops. Halting is also undecidable when the program's search space is infinite and may not contain any solution satisfying the program's constraints, so the search can potentially continue forever.
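To make the finite case concrete, here is a hypothetical sketch (the function names are my own, not from any reference): when every configuration a program can reach is known and finite, halting is decidable by simulating the program and watching for a repeated configuration, which proves an infinite loop.

```python
# Sketch only: decides halting for programs whose entire configuration
# space is finite and hashable. "step" maps a state to the next state,
# or returns None when the program halts.
def halts(step, initial_state):
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:
            return False      # configuration repeated: provably loops forever
        seen.add(state)
        state = step(state)
    return True               # step returned None: the program halted
```

For example, a counter that cycles mod 5 is caught as a non-halting loop, while a counter that stops at 3 is reported as halting. The moment the state space becomes infinite, the `seen` set can grow without bound and this decision procedure no longer works, which is exactly the boundary described above.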
Not being a mathematician, I can only guess that at its essence, the halting problem could be regarded as a function that fails because 1) its domain is incompletely known or 2) its range is incompletely known. If that simplification is fair, then undecidability is inextricably linked to incomplete knowledge, ergo uncertainty.
This is where others who have explored the boundary between certainty and uncertainty come in (in my example, a few AI folks). In practice, it is possible to manage a program's inability to halt, or its potentially infinite search, just as operating systems detect deadlocks and kill nonresponsive processes stuck in infinite loops. As with probabilistic algorithms, the criterion for "halt" is redefined to be no longer "optimal" but "close enough". Such algorithms then have measurable asymptotic complexity.
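As a sketch of that "close enough" criterion (the names and thresholds here are my own invention, not any established API): a search over a possibly endless stream of candidates is forced to halt either when a candidate scores well enough or when a step budget runs out, returning the best candidate seen so far.

```python
# Sketch only: "halt" is redefined as "good enough, or out of budget".
def bounded_search(score, candidates, good_enough, max_steps=10_000):
    best, best_score = None, float("-inf")
    for step, c in enumerate(candidates):
        s = score(c)
        if s > best_score:
            best, best_score = c, s
        if s >= good_enough or step + 1 >= max_steps:
            break             # redefined halting criterion
    return best, best_score
```

Even if `candidates` is an infinite generator and no candidate ever reaches `good_enough`, the loop halts within `max_steps` iterations, so the worst-case running time is well defined.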
Yes, the imposed limits on the algorithm artificially reshape its complexity. But so what? Just treat the core algorithm and its keeper as one, and voila, you've built a compound program that 1) halts and 2) has a complexity measure.
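A minimal sketch of such a compound program, assuming the core algorithm can be written as a generator that yields intermediate results (the Collatz example is only an illustration of a core whose halting is not known in general):

```python
# Sketch only: the "keeper" advances the core at most max_steps times,
# so the compound (core + keeper) is guaranteed to halt, with a
# worst-case complexity of O(max_steps) keeper iterations.
def keeper(core_gen, max_steps):
    result = None
    for _, result in zip(range(max_steps), core_gen):
        pass
    return result             # last value produced before the cutoff

def collatz_steps(n):
    """Core: yields a running count of Collatz steps for n."""
    count = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        count += 1
        yield count
```

If the core halts within the budget, the keeper returns its final value; if not, the keeper cuts it off, and either way the pair halts.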
For fun, to confirm that input is not central to the halting problem, try removing the word "input" from the Wikipedia entry.
> >> For example, a UTM that uses a small number of states will have a lower
> >> time complexity than a UTM using (or that must explore) a larger number
> >> of states.
>
> That seems backwards for UTM. In general the number of states a TM
> has will have nothing to do with the complexity of its running
> time. I really doubt UTM's are an exception.
>
> Stephen
My point is that the size of the problem's input space or search space, or even the UTM's execution space, will dictate the effective runtime. If the UTM is based on a quantum computer or a CA, which must explore more states than another UTM to get the same job done, perhaps even nondeterministically, then the speed of the machine will matter too.