On Thursday, September 19, 2013 7:26:55 AM UTC-4, Peter Percival wrote:
> Dan Christensen wrote:
> > On Wednesday, September 18, 2013 7:11:53 PM UTC-4, Rotwang wrote:
> >
> >> will follow from people defining exponentiation in the usual way?
> >
> > Whatever consequences may arise from a calculation that results in a
> > value of 1 when it should be 0. The result could be catastrophic.
>
> When "should" 0^0 = 0?
The value of 0^0 should not depend on the context.
> This is what has happened. You were writing a computer program and you
> wrongly believed that exp(0,0)--or whatever the notation is--would
> return 0. It didn't, it returned--quite properly--1. Your program
> being buggy is not a catastrophe. The thing to do with bugs is to fix
> them[*], _not_ try to "fix" mathematics.
Let's see your proof that 0^0=1. Sorry, simply defining it as such won't do in this context.
> > The most well known theorem that makes use of 0^0 is probably the
> > Binomial Theorem. And it can easily be restated to avoid its use,
> > e.g. by stating at the outset that (x+0)^n = x^n, etc.
>
> That (x+0)^n = x^n follows from the definition of addition.
> Note, btw, that the definition of + begins by saying what x+0 is, it
> doesn't start with x+1 or x+2.
Technically, + on N is not simply defined. It is a construction based on Peano's axioms (or their modern equivalent) and set theory. You could probably construct an exponent function on N with 0^0=1, but you could also construct one with 0^0=0. And they would agree at every pair of arguments except (0,0).
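The two constructions above can be sketched as recursive definitions. This is only an illustrative sketch (the function names are made up for this post): one version takes x^0 = 1 as its base case for every x, the other carves out 0^n = 0 for every n, and they disagree only at (0,0).

```python
def pow_one(base, exp):
    """Exponentiation on N with the convention 0^0 = 1."""
    if exp == 0:
        return 1          # base case: x^0 = 1 for every x, including x = 0
    return base * pow_one(base, exp - 1)

def pow_zero(base, exp):
    """Exponentiation on N with the alternative convention 0^0 = 0."""
    if base == 0:
        return 0          # 0^n = 0 for every n, including n = 0
    if exp == 0:
        return 1          # x^0 = 1 for every x > 0
    return base * pow_zero(base, exp - 1)

# The two functions agree everywhere except at (0, 0):
assert all(pow_one(b, e) == pow_zero(b, e)
           for b in range(5) for e in range(5) if (b, e) != (0, 0))
assert pow_one(0, 0) == 1
assert pow_zero(0, 0) == 0
```

Both are perfectly good recursive definitions; nothing in the recursion itself forces one base case over the other, which is the point being made.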