Looking through some elementary linear algebra textbooks, I noticed that although they cover matrix inversion in the standard way, using row reduction to go from [A | I] to [I | A^(-1)], when they cover inversion of block matrices they don't use the obvious analog. That is, to find the inverse of, say, [[A,B],[0,C]] (with appropriate conditions on the blocks), they don't augment this matrix with the block identity [[I,0],[0,I]] and then perform block row operations — for example, multiplying the first row on the left by A^(-1). Instead, the usual approach in these textbooks is to multiply the given matrix by an unknown block matrix, say [[X,Y],[Z,W]], equate the product with the block identity, and solve the resulting matrix equations.

The first method seems simpler and has an obvious connection with the method used to find inverses of numerical matrices (Gauss-Jordan). It is true that block row operations require a bit more care, particularly with the order of multiplication, but is there a more fundamental reason why these textbooks cover this topic the way they do?
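For concreteness, here is a small NumPy sketch (my own illustration, not taken from any textbook) of the block Gauss-Jordan method applied to [[A,B],[0,C]] with random invertible 2x2 blocks. The augmented matrix [M | I] is reduced by block row operations, each multiplying on the left:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))
Z, I = np.zeros((2, 2)), np.eye(2)

M = np.block([[A, B], [Z, C]])            # the block matrix to invert
aug = np.block([[A, B, I, Z],             # [M | I], augmented by the block identity
                [Z, C, Z, I]])

# Block row operations -- note every multiplication is on the LEFT:
aug[:2] = np.linalg.inv(A) @ aug[:2]      # row1 <- A^(-1) row1
aug[2:] = np.linalg.inv(C) @ aug[2:]      # row2 <- C^(-1) row2
aug[:2] = aug[:2] - aug[:2, 2:4] @ aug[2:]  # row1 <- row1 - (A^(-1)B) row2

# The left half is now the block identity; the right half is M^(-1),
# matching the closed form [[A^(-1), -A^(-1) B C^(-1)], [0, C^(-1)]].
Minv = aug[:, 4:]
print(np.allclose(Minv @ M, np.eye(4)))   # True
```

The comment on each step records the block operation being performed; keeping the multiplications on the left is exactly the "order of multiplication" care the question mentions.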