Topic: Proof that mixed partials commute.
Replies: 20   Last Post: Nov 22, 2013 11:57 PM

Roland Franzius

Posts: 564
Registered: 12/7/04
Re: Proof that mixed partials commute.
Posted: Nov 21, 2013 2:25 AM

On 19.11.2013 00:45, Hetware wrote:
> In _Calculus and Analytic Geometry_ 2nd ed.(1953), Thomas provides:
> Theorem. If the function w=f(x,y) together with the partial derivatives
> f_x, f_y, f_xy and f_yx are continuous, then f_xy = f_yx.
> Both Thomas and Anton (1980) provide rather long-winded proofs of this
> theorem. These proofs involved geometric arguments, auxiliary
> functions, the mean-value theorem, epsilon error variables, a
> proliferation of symbols, and a generous helping of obscurity.
> Starting from the definition of partial differentiation, and using the
> rules of limits, along with a modest amount of basic algebra, I came up
> with this:
> f_x(x,y) = Limit[ [f(x+Dx,y) - f(x,y)]/Dx, Dx->0 ]
> f_yx(x,y) = Limit[ [f_x(x,y+Dy) - f_x(x,y)]/Dy, Dy->0 ]
>   = Limit[
>       [[f(x+Dx,y+Dy)-f(x,y+Dy)] - [f(x+Dx,y)-f(x,y)]] / (Dy Dx),
>       {Dy->0, Dx->0} ]
> f_xy(x,y) = Limit[
>       [[f(x+Dx,y+Dy)-f(x+Dx,y)] - [f(x,y+Dy)-f(x,y)]] / (Dx Dy),
>       {Dx->0, Dy->0} ]
> The only caveat is that the rules for limits, such as /the product of
> limits is equal to the limit of the products/, are stated in terms of a
> single variable. For example:
> Limit[F(t) G(t), t->c] = Limit[F(t), t->c] Limit[G(t), t->c]
> Whereas I am assuming
> Limit[F(x) G(y), {x->c, y->c}] = Limit[F(x), x->c] Limit[G(y), y->c].
> I argue as follows. The statement that x->c as y->c can be formalized by
> treating x and y as functions of t such that
> Limit[x(t), t->c] = Limit[y(t), t->c] = c
> For every epsilon_F > 0 there exists delta_F > 0 such that
>   |x-c| < delta_F ==> |F(x)-F(c)| < epsilon_F,
> and likewise a delta_G for G. Similarly,
>   |t-c| < delta_x ==> |x(t)-c| < epsilon_x,
>   |t-c| < delta_y ==> |y(t)-c| < epsilon_y.
> Now the delta_F obtained from epsilon_F can be used as epsilon_x, which
> yields a delta_x with |t-c| < delta_x ==> |x(t)-c| < delta_F ==>
> |F(x(t))-F(c)| < epsilon_F. So
> Limit[F(x(t)), t->c] = Limit[F(x), x->c], etc.
> It follows that
> Limit[F(x(t)) G(y(t)), t->c]
> = Limit[F(x(t)), t->c] Limit[G(y(t)), t->c]
> = Limit[F(x), x->c] Limit[G(y), y->c]
> Am I making sense here? I feel as though I am trying to prove the
> obvious, but it is not obvious how to prove it.
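
As a quick numeric sanity check of the iterated-limit formulas quoted
above (a sketch only; the test function sin(x)*exp(y), the base point and
the step sizes are arbitrary choices), the double difference quotient is
literally the same expression for both orderings and converges to the
known mixed partial of a smooth function:

import numpy as np

def f(x, y):
    # an arbitrarily chosen smooth test function
    return np.sin(x) * np.exp(y)

def mixed_quotient(f, x, y, dx, dy):
    # [[f(x+Dx,y+Dy)-f(x,y+Dy)] - [f(x+Dx,y)-f(x,y)]] / (Dx Dy)
    return ((f(x + dx, y + dy) - f(x, y + dy))
            - (f(x + dx, y) - f(x, y))) / (dx * dy)

x0, y0 = 0.7, -0.3
exact = np.cos(x0) * np.exp(y0)      # here f_xy = f_yx = cos(x) exp(y)
for h in (1e-2, 1e-3, 1e-4):
    print(h, mixed_quotient(f, x0, y0, h, h), exact)

The subtle point, of course, is whether the two iterated limits (Dy first
or Dx first) both equal this joint limit; that is where the continuity
hypotheses enter.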

Since you assume existence and continuity of all partial derivatives up
to second order, you have to invoke equicontinuity as the condition under
which limits of continuous functions are again continuous.

Equicontinuity means exactly what you have written implicitly, namely
that your epsilon/delta _functions_ of the other variable are constants,
independent of the remaining free variables.


lim_{h->0} (1/h) ( f(x+h,y) - f(x,y) )

is a continuously indexed limit of functions of the free variable y.

A general limit of continuous/differentiable functions may develop
singularities, as lim_{n->oo} x^n on [0,1] does, because for every n you
need a different delta; the convergence is not uniform.
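
A minimal numeric illustration of that point (a sketch; epsilon = 0.1 and
the sample points are arbitrary): for f_n(x) = x^n on [0,1] the pointwise
limit is 0 for x < 1 but 1 at x = 1, and the n needed to push x^n below a
fixed epsilon blows up as x approaches 1, so no single n works uniformly:

import math

eps = 0.1
for x in (0.5, 0.9, 0.99, 0.999):
    # smallest n with x**n <= eps grows without bound as x -> 1
    n = math.ceil(math.log(eps) / math.log(x))
    print(x, n)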

The commutativity of independent differential operators is at the heart
of functional analysis with the rules

d^2 =0 for exterior differential forms or

f_xy - f_yx=0 for the algebraic setting

or rot grad f=0

for the vector analysis setting. So it's a much better learning strategy
to look for smoothness definitions in the context of function spaces than
to painfully filter out a general sufficient condition for the
interchangeability of pointwise limits of real functions in the supremum
norm.
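
As a small symbolic check of those identities (a sketch; SymPy and the
concrete smooth field sin(x*y)*exp(z) are arbitrary choices), the mixed
partials agree and the curl of the gradient vanishes componentwise:

import sympy as sp

x, y, z = sp.symbols('x y z')
g = sp.sin(x*y) * sp.exp(z)          # a concrete smooth scalar field

# mixed partials commute: g_xy - g_yx == 0
print(sp.simplify(sp.diff(g, x, y) - sp.diff(g, y, x)))    # 0

# rot grad g = 0, componentwise
grad = [sp.diff(g, v) for v in (x, y, z)]
curl = [sp.diff(grad[2], y) - sp.diff(grad[1], z),
        sp.diff(grad[0], z) - sp.diff(grad[2], x),
        sp.diff(grad[1], x) - sp.diff(grad[0], y)]
print([sp.simplify(c) for c in curl])                      # [0, 0, 0]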


Roland Franzius
