I found the information you provided interesting, but unless I am reading it incorrectly, your discussion of significant digits is more about what happens when representing an irrational value (or a rational value too long for a calculator's display) than about the idea I was referring to in the MidPoW.
The MidPoW problem that prompted my thinking was something different, and perhaps I am incorrect to call it a question of "significant digits." I'll provide one example of a calculation the problem calls for, and then maybe you can let me know whether this, too, would be an example of considering "significant digits."
The question asks whether the given 1500-meter times would break a 25-mile-per-hour speed limit. One of the given times was 2:20.8. So, one way to figure it would be to find the speed of the skater given the time and the distance. (This is an approach that middle school students might try.)
This time, 2:20.8 (minutes:seconds), is 140.8 seconds. If you divide 1500 meters by 140.8 seconds, you get 10.653409 meters/second. Because you start with 140.8 (4 significant digits), the thought is that you would round off that answer to 10.65 meters/second.
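To make the arithmetic concrete, here is a quick Python sketch of that calculation, with a simple significant-figures helper and a conversion to miles per hour for the speed-limit question. (The helper function and the mph conversion are my additions, not part of the original problem.)

```python
import math

def round_sig(x, n):
    """Round x to n significant digits (simple sketch; assumes x != 0)."""
    if x == 0:
        return 0.0
    # Position of the most significant digit relative to the decimal point.
    magnitude = math.ceil(math.log10(abs(x)))
    return round(x, n - magnitude)

time_s = 2 * 60 + 20.8             # 2:20.8 -> 140.8 seconds
speed_mps = 1500 / time_s          # 10.653409... meters/second
rounded = round_sig(speed_mps, 4)  # 140.8 has 4 significant digits -> 10.65

# For the speed-limit question: 1 mile = 1609.344 meters.
speed_mph = speed_mps * 3600 / 1609.344  # about 23.83 mph, under the 25 mph limit

print(rounded, speed_mph)
```

So by this approach the skater stays under the limit, and the interesting part, as discussed below, is whether and when to do that rounding.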
I could continue, but it is that "rounding off" part that I'm interested in discussing.
><SNIP>************
>1. At what age do you teach your students about significant digits?
>What age do you think this is normally taught?
>
>2. Do you have a favorite activity that you use to teach your
>students about significant digits?
><unSNIP>********************
>
>There are several issues related to the quality of approximations.
>Among them are the distinctions of precision, accuracy, significant digits,
>rounding techniques, and the tolerance of measuring devices (this last is
>usually omitted).
>
>Precision of a decimal representation of a real number refers to the number
>of digits displayed in relation to the decimal point. Often this is given
>by the total number of displayed digits and the number of digits following
>and preceding the decimal. We expect most computers and calculators to work
>with and display finite-precision decimals rather than real numbers.
>(Whether they actually work with another form of representation, such as
>hexadecimals, or have symbolic manipulation capabilities for symbols such as
>sqrt(2), is another issue.) The limitation of finite-precision decimal
>arithmetic must of necessity introduce errors in attempts to represent
>irrational values and rational values with "long" decimal formats. Such
>values cannot be exactly represented with finite-precision decimals. Hence
>we are forced to use approximations for arithmetic. We would prefer that
>approximations be "good."
>
>On the other hand, accuracy measures the error in an approximation. Consider
>two approximations for pi: 3.1 and 4.7893. The second approximation is
>more precise than the first. After all, the first approximation consists of
>two digits, one of which follows the decimal, while the second consists of
>5 digits, 4 of which follow the decimal. However, 3.1 is a more accurate
>approximation for pi than 4.7893, because 3.1 is "closer" to pi: it has a
>smaller error.
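The precision-versus-accuracy distinction above can be checked directly; a quick Python sketch (my addition, not Ron's):

```python
import math

# Two approximations of pi: 3.1 (two digits) and 4.7893 (five digits).
# The second is more precise, but accuracy is measured by the error.
err_a = abs(math.pi - 3.1)     # about 0.0416
err_b = abs(math.pi - 4.7893)  # about 1.6477

# The less precise approximation is the more accurate one.
print(err_a < err_b)  # True
```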
>
>Suppose that a rational number (but not an integer) has an exact
>finite-precision decimal representation in some system. Examine the digits
>of this representation from left to right. The first non-zero digit you
>encounter is the Most Significant Digit. For example, 3 is the most
>significant digit in 3.124, but 2 is the most significant digit in
>0.0002134. All the digits that follow the most significant digit are
>significant.
>
>The task is not so easy with finite-precision decimal approximations. For
>example, in the approximation for pi 3.1415278, 3 is the most significant
>digit. But the digits that follow 3 are significant only to the point where
>they fail to come within +-1 of the corresponding digit in the infinite
>decimal representation. (For measurement problems, we must also consider how
>well the device measures, so that we do not exceed the tolerance of the
>device AND the skill of the human operator. We might even worry about errors
>in measurement enough to shift to sample measures with an average within the
>bounds of tolerance measured by standard deviation. But we probably would
>not worry about that in an introduction.) Hence, in 3.1415278, the digits
>3, 1, 4, 1, 5 are all significant; 5 is the least significant digit. But 2
>was poorly chosen and does not lie within +-1 of the correct value at that
>decimal position. Therefore, 2, 7, and 8 are not significant.
>
>One measure of the accuracy of an approximation is the number of significant
>digits.
>
>So for the first question: At what age should this be taught? What is
>normal? I have no feeling for this topic in terms of age. Rather, I would
>choose to begin this discussion the first time someone notices that their
>calculator is "rounding" for them. Until then I would tend to keep rounding
>more informal, as a way of estimating, rather than call it approximating.
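That +-1 criterion can be turned into a rough digit-by-digit check. This sketch is my own, and deliberately naive: it assumes the two numbers line up digit for digit (same position for the decimal point), as they do in the 3.1415278 example:

```python
import math

def count_significant(approx_str, true_value):
    """Count the leading digits of approx_str that lie within +-1 of the
    corresponding digit of true_value. Naive digit-by-digit comparison;
    assumes the decimal points of the two numbers already line up."""
    true_str = f"{true_value:.{len(approx_str)}f}"
    count = 0
    for a, t in zip(approx_str, true_str):
        if a == ".":
            continue
        if abs(int(a) - int(t)) <= 1:
            count += 1
        else:
            break  # the first failing digit ends significance
    return count

# In 3.1415278, the digits 3, 1, 4, 1, 5 are significant; 2, 7, 8 are not.
print(count_significant("3.1415278", math.pi))  # 5
```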
>
>When approximation is broached, I would want to discuss at least 4 different
>kinds of rounding (based on the +-1 indication of significance): round up
>(ceiling), round down (floor, or truncate), simple round off, and scientific
>round off (to ameliorate accumulated round-off error). I would also point
>out that most calculating machines maintain an internal precision greater
>than the displayed precision, so that the type of discrepancies you
>encountered in early versus late round-off are not so obvious.
>
>For the second question:
>
>Some of the difficulties associated with computer arithmetic come from
>rounding (some exact finite-precision decimals turn out to be infinite
>hexadecimals). Calculations involving quotients of very small differences
>(sound familiar?) can also produce excessive round-off error. But I don't
>think I would want to discuss these in an introduction. However, adding by
>hand the finite decimal representations of 1/3 + 1/3 + 1/3 might be
>interesting. My favorite activity for significant digits, though, would
>probably revolve around calculator usage. Except for practice, it seems to
>me that the best use would not be to round for calculation, but to round
>after calculation, as a recognition of errors based on precision and
>significance issues. For example, the area of a square exactly 3.1 cm on a
>side is 9.61 cm^2. However, if 3.1 is an approximation to the nearest tenth
>(perhaps an approximation of pi) for reasons of precision limitations or
>human or machine error, then an area of 9.61 cm^2 implies that the area is
>measured to a precision (2 decimal places) greater than the original
>measures, with an associated implication of greater accuracy. Usually, we
>would then round the area to 9.6 cm^2 to maintain conformance with the
>original precision.
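Both the 1/3 + 1/3 + 1/3 exercise and the 3.1 cm square can be tried numerically; a short sketch (my addition):

```python
# Adding three finite-precision decimal representations of 1/3:
# by hand this gives 0.999, not 1 -- the round-off error is visible.
thirds_sum = 0.333 + 0.333 + 0.333   # about 0.999

# Area of a square exactly 3.1 cm on a side:
area = 3.1 ** 2                      # 9.61 cm^2 (two decimal places)

# If 3.1 is only an approximation to the nearest tenth, reporting 9.61
# overstates the precision; round back to one decimal place.
reported = round(area, 1)            # 9.6 cm^2

print(thirds_sum, area, reported)
```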
>Note that the
>precision is usually assumed to be the same for all numbers in a calculator
>without regard to how many digits you actually enter: i.e., finite-precision
>decimals (like numbers in most textbooks) are treated as exact. Also, we
>expect the answers produced by the calculator to be reduced to the same
>level of precision as that assumed for the operands.
>
>However, it is not clear to me at what age the subtle distinction between an
>exact representation given by 3.1 and an approximation represented by 3.1
>would make good sense. I do think that some feeling for these distinctions
>could be developed by having students measure their heights to the nearest
>inch (use a ruler marked only in inches, no fractions), then discuss when
>they could tell from the numbers who was taller and when there was
>uncertainty.
>
>cordially,
>-Ron
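The height activity can even be quantified: when each height is reported to the nearest inch, each reading may be off by up to half an inch, so two readings must differ by more than one inch before we can be sure who is taller. A sketch of that reasoning (my addition, not Ron's):

```python
def surely_taller(a_inches, b_inches):
    """Given two heights rounded to the nearest inch (so each true height
    lies within +-0.5 in of its reading), return True only when one
    person is certainly taller than the other."""
    return abs(a_inches - b_inches) > 1

print(surely_taller(64, 66))  # True: even in the worst case a gap remains
print(surely_taller(64, 65))  # False: true heights could be 64.49 and 64.51
```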