In <email@example.com> "Bruno Luong" <firstname.lastname@example.org> writes:
>kj <email@example.com> wrote in message <firstname.lastname@example.org>...
>>
>> One of the most often cited coding "best practices" is to avoid
>> structuring the code as one "large" function (or a few large
>> functions), and instead to break it up into many tiny functions.
>> One rationale for this strategy is that tiny functions are much
>> easier to test and debug.
>I would be careful about applying this "best practice" blindly. MATLAB has a huge runtime performance penalty due to function-call overhead (as opposed to other programming languages such as C, Fortran, ...).
>You will slow down the code by putting simple (scalar) arithmetic expressions in functions. If the expression is vectorized and is supposed to operate on a big array, then it's OK to put it in a function.
To be sure, the benefits of using many tiny functions over a few big ones need to be weighed against the performance costs of the additional function calls. This is always true.
In practice, however, I've found it is possible to narrow down significantly the performance-critical parts of the code; outside of those parts, the added cost of the extra function calls becomes negligible. This usually leaves a fairly large scope of applicability for the "tiny functions" strategy. (E.g., a 100-line function may be boiled down to ~10-20 functions of 2-3 lines each, plus one 15-line performance-critical computational "core".)
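To make the overhead concrete, here is a rough sketch of a micro-benchmark (not a rigorous measurement; `addOne` is a made-up one-liner, written as an anonymous function so the script is self-contained):

```matlab
% Sketch: compare inline scalar arithmetic against the same arithmetic
% wrapped in a (hypothetical) tiny function, to expose per-call overhead.
addOne = @(x) x + 1;   % stand-in for a trivial one-line helper

N = 1e6;

x = 0;
tic
for k = 1:N
    x = x + 1;         % inline scalar arithmetic
end
tInline = toc;

x = 0;
tic
for k = 1:N
    x = addOne(x);     % same arithmetic, plus one call per iteration
end
tCall = toc;

fprintf('inline: %.4f s, via function: %.4f s\n', tInline, tCall);
```

On my machine the per-call version is noticeably slower, but since a timing like this sits in the tight loop, it is exactly the kind of "core" I would leave unrefactored; everywhere else the difference is lost in the noise.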
And even when performance argues against breaking down a function any further, my practice has been to go ahead and code the slow-but-testable implementation, structured around tiny functions, and use this slow version to check the correctness of the fast, production version of the code. So even when performance is a consideration I find the tiny-functions paradigm extremely useful.
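In sketch form, the idea looks something like this (`slowVersion` and `fastVersion` are placeholder names, not real code):

```matlab
% Sketch: use the slow, tiny-function implementation as a test oracle
% for the fast production implementation.
slowVersion = @(v) arrayfun(@(x) x.^2 + 1, v);  % readable, call-heavy reference
fastVersion = @(v) v.^2 + 1;                    % vectorized production "core"

v = rand(1, 1000);
err = max(abs(fastVersion(v) - slowVersion(v)));
assert(err < 1e-12, 'fast version disagrees with slow reference');
```

The slow version never ships, but it earns its keep every time the fast version is modified.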