
Re: Structured Programming
Posted:
Mar 6, 2014 10:18 AM


On Mar 5, 2014, at 4:13 AM, kirby urner <kirby.urner@gmail.com> wrote:
> If we're going to talk about the mathematical basis for languages, especially those of the FP species, then lambda calculus is a starting point. Alonzo Church etc.
If we are going to talk about the mathematical basis of programming languages, I would like to know what we mean by "basis". Virtually everything has a mathematical basis if we mean that our models of virtually everything involve mathematics. That would seem to be a trivial point to make. What I think we mean, or what was meant here in this discussion, is influence, not basis. But influence how? Did formal theory influence programming language design (PLD)? Or did prior art influence PLD?
I think context-free grammars (CFGs) are a good case study. Did the formal theory of CFGs influence programming languages? Not in the beginning, because the formal theory of CFGs wasn't developed until the mid-'50s, and many languages had already been developed before then, including FORTRAN. Did the formal theory of CFGs even influence BNF? I seriously doubt it, for a couple of reasons. First, I don't see anything in the literature of the time (the ALGOL papers) that mentions such an influence. And more importantly, the work they were doing with ALGOL, all the notes, and all the pseudo-notation in those notes seem to have been more than sufficient inspiration on their own. I think Backus, along with his prior developments, saw through all that to a neat generic way to document the syntax of ALGOL.
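To make the notation concrete, here is a toy sketch (my own illustration, not from the ALGOL report): two BNF-style productions and a recursive-descent recognizer for them, which is roughly how such grammars map onto compiler front ends.

```python
# Toy grammar in BNF-style notation (my example, not ALGOL's):
#
#   <expr> ::= <term> | <term> "+" <expr>
#   <term> ::= <digit> | "(" <expr> ")"
#
# Each production becomes one recursive function; the recursion in the
# grammar becomes recursion in the recognizer.

def parse_expr(s, i=0):
    """Return the index just past a well-formed <expr> starting at i, or -1."""
    i = parse_term(s, i)
    if i == -1:
        return -1
    if i < len(s) and s[i] == "+":
        return parse_expr(s, i + 1)
    return i

def parse_term(s, i):
    """Return the index just past a well-formed <term> starting at i, or -1."""
    if i < len(s) and s[i].isdigit():
        return i + 1
    if i < len(s) and s[i] == "(":
        i = parse_expr(s, i + 1)
        if i != -1 and i < len(s) and s[i] == ")":
            return i + 1
    return -1

def recognizes(s):
    return parse_expr(s) == len(s)

print(recognizes("1+(2+3)"))  # True
print(recognizes("1+"))       # False
```

The point of the case study stands either way: you can write a recognizer like this with only an intuitive sense for the grammar, long before anyone hands you a formal theory of CFGs.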
Does this mean that the PLD creators had, amongst themselves, created a formal (or even informal) theory of CFGs? No. It means only that they had developed an intuitive sense for CFGs, as anyone working on PLD did. It is a sense that comes pretty naturally with PLD and compiler design. I am sure that they had discussions which they would later realize related to CFGs, but most of their discussions were centered on semantics, technological hurdles and aesthetics, not on a formal or even informal theory of CFGs.
I also don't see that the formal theory of lambda calculus had influenced anything yet. At least, it isn't evident in the record that this was foremost in their minds. Procedures and functions were still thought of as separate things; likewise functions and variables. Again, there were many other things on their minds, technological things like dynamic arrays and choosing a character set that was supported on the hardware of the time. A computer function involves so many aspects that mathematical functions do not: reentrancy, data types, parameter-passing mechanisms (by value, by name or by reference), return mechanisms, stack usage, heap usage, memory allocation, garbage collection. Stuff like this seems to dominate the discussions, as well as aesthetics.
The influence I see was the computational paradigms of mathematics, but the "machine" adds so much technological overhead to those paradigms that implementing them becomes more of an engineering feat than the abstract endeavor it is in mathematics. I think formal CS is a branch of mathematics, while the actual implementation, which includes hardware design, software design and language design, is a branch of engineering. And the connections between the two are no different than those between other branches of engineering and their formal backends.
Btw, some interesting tidbits from all of this research.
It seems that block structure came from the European side. I did manage to pinpoint the invention of "scope":
http://portalparts.acm.org/1070000/1060889/fm/frontmatter.pdf

"7.33 Suggestion from A. Van Wijngaarden and E.W. Dijkstra: In the begin of a procedure (the heading) automatically a new level of nomenclature is introduced..."
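That quoted idea, that entering a procedure or block opens a "new level of nomenclature", is what we now call lexical scope. A minimal sketch in Python (whose scopes are per-function rather than per-block, but the point carries):

```python
# Entering a function opens a new level of nomenclature: an inner
# name can shadow an outer one without destroying it.

x = "outer"

def block():
    x = "inner"   # a new declaration at the inner level; shadows x
    return x

print(block())  # inner
print(x)        # outer -- the outer binding is untouched
```

ALGOL's begin/end blocks generalized this so that any block, not just a procedure body, could introduce fresh names.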
There was also a pair of documents published in 1978, one covering the American side of the ALGOL60 discussions and one the European side. Unfortunately I couldn't find public links, and the documents are lengthy. They are on the ACM Digital Library, and you can get a free five-minute preview. The free ALGOL bulletins (like the one above) provide some of the discussion details.
One big difference between the two sides was past experience. The Europeans had only really started building computers in the '50s, mostly in universities (the Americans had done so a decade earlier). The Americans, on the other hand, already had a substantial computer industry by this time. I assume this was due to the effect of WWII on Europe.
The Europeans had very little experience with autocoding (writing assemblers) while the Americans had quite a lot, which is why FORTRAN came so early (Backus began the project in 1953; the first compiler was delivered in 1957). The Europeans were well aware of all of this, and by the late '50s FORTRAN had quickly become the standard for scientific computing on both sides.
By skipping all of that (assembler) development, I think the Europeans had an advantage in thinking about what the next programming language would be. They were not as entrenched in industry standards as the Americans were at this point. Nonetheless, it wasn't a home run. ALGOL58 was incomplete, and they tried to fix this with ALGOL60. They fully intended that ALGOL supplant FORTRAN, and in the 1978 papers they fully admit they failed. Even PL/I, an ALGOL derivative created by IBM, couldn't supplant FORTRAN, also an IBM creation.
I think one of the reasons these "advanced" programming paradigms couldn't kill FORTRAN was just that: they were advanced. FORTRAN was simple and got the job done. I did computational programming in both C and FORTRAN in the early '80s, and I preferred FORTRAN because you had to think less about programming. However, soon after, once I became a professional programmer and C and C++ became second nature, I preferred C and C++ greatly over FORTRAN. But there is a steeper learning curve with C and C++ than there is with FORTRAN. You deal more with "programming" in C, and certainly in C++, than you do in FORTRAN. Someone who just wants to get the job done and not learn programming will get it done sooner with FORTRAN.
Also, structurally FORTRAN is close to assembler, and it is easy to compile and easy (for the compiler) to optimize (for speed). This was a very important consideration back then. It wasn't until the '90s that general-purpose machines, especially the PC architecture, became powerful enough to fully realize our best ideas in programming. And when I say powerful enough, I mean the whole system, including large and very fast hard drives, a lot of fast RAM, and very fast CPUs with very fast memory caching. These best ideas, many of them originating with ALGOL, are not easy to implement. They involve layers of compilation and runtime environments that can be as complex as the OS. Memory allocation and garbage collection were, I think, the longest-lasting hurdles, and I consider their solution (in the late 1990s) the point at which we finally caught up with our best ideas.
Bob Hansen

