In article <firstname.lastname@example.org>, email@example.com (Gian Carlo Macchi) wrote:
> In article <hpa.31ba9c8e.I.use.Linux@freya.yggdrasil.com>,
> firstname.lastname@example.org says...
> >...
> >Blah. This is, IMHO, not much more than IBM still trying to justify
> >COBOL.
>
> Only for info. When IBM designed PL/I (which *should* have become the one
> language), decimal floating point types were also taken into account,
> besides decimal fixed point.
>
> Decimal fixed point types were implemented on some computers via decimal
> integer arithmetic (e.g. in the IBM 370 series and derivatives), but as far
> as I know, decimal floating point types were neither implemented in
> hardware nor simulated in software, and remained pure "artificial" types.
But there were computers (from an era long, long ago!!) that used BCD floating point, even in hardware. You could get up to 99 digits of accuracy if you felt like it, but the Fortran compiler only went up to 28.
Yes, the IBM 1620 had hardware floating point, and it WAS BCD. And yes, there was software that operated on the same data format the floating point hardware used.
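For anyone who never touched a decimal machine: the key difference is that precision is counted in decimal digits, not binary bits, so values like 0.1 are exact. A rough sketch of the idea using Python's decimal module (the 28- and 99-digit settings below just mirror the limits mentioned above; this is an illustration, not 1620 emulation):

```python
from decimal import Decimal, getcontext

# Software decimal floating point, precision set in decimal digits --
# 28, like the Fortran compiler's limit mentioned above.
getcontext().prec = 28

# 0.1 is exact in decimal arithmetic, unlike in binary floating point.
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3"))  # True
print(0.1 + 0.1 + 0.1 == 0.3)  # False: binary rounding error

# Precision can be cranked up, e.g. toward the 99 digits the hardware allowed.
getcontext().prec = 99
print(Decimal(1) / Decimal(3))  # 0. followed by ninety-nine 3s
```

The same property is what made BCD arithmetic attractive for commercial work in the first place: dollars-and-cents values round the way a human bookkeeper expects.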
All back in the '60s. You see, the 1620 was IBM's first "personal" computer: you used it up close and very personal. Of course, you ended up fighting for console time and trying to debug as fast as possible. Interesting times...
You're right in one respect. The 360/370... machines don't have BCD floating point.