But there are "scientific" rules for rounding that should be used. These rules are based on where the numbers came from; they are actually mathematical, not arbitrary, and they should be used starting early on. So if the number is 5.0, you start the students with the idea that it has 2 significant figures and that answers should be given to a similar precision. They should be introduced to the idea that 5.0 could be any number between 4.95 and 5.04999..., assuming the rule that you round up on the digit 5.
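A minimal sketch of that interval, using Python's decimal module to get the round-half-up convention described above (the built-in round() uses round-half-to-even, which would muddy the point):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_two_sig_figs(x):
    # Round a Decimal near 5 to one decimal place, half up,
    # so the boundary value 4.95 rounds up to 5.0.
    return x.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)

# Every value from 4.95 up to (but not including) 5.05 reports as 5.0.
for s in ["4.94", "4.95", "5.00", "5.04999", "5.05"]:
    print(s, "->", to_two_sig_figs(Decimal(s)))
```

The endpoints behave as claimed: 4.94 reports as 4.9, 4.95 through 5.04999... report as 5.0, and 5.05 rounds up to 5.1.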
Now of course this does not apply to counting numbers, which are always exactly whole numbers. But measurements are all subject to uncertainty, and for every counting number there may be an infinite number of possible measurements, so measurements are extremely important.
Of course the more exact rules for addition and subtraction can be introduced later. But they should be introduced by an experiment in which students see how the uncertainty propagates to the result. In other words, they put in maximum, minimum, and average values for the input numbers and look at how the results vary. For those going on to careers where this matters, a good statistics course will then be understandable.
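The experiment above can be sketched in a few lines. The measurements and their half-widths here are made-up illustrative values, not from the original post:

```python
def spread(f, a, da, b, db):
    # Evaluate f at every min/average/max combination of the two
    # inputs and report the smallest, central, and largest results.
    results = [f(x, y) for x in (a - da, a, a + da)
                       for y in (b - db, b, b + db)]
    return min(results), f(a, b), max(results)

# Example: adding two lengths, 12.3 cm (+/- 0.05) and 4.56 cm (+/- 0.005).
lo, mid, hi = spread(lambda x, y: x + y, 12.3, 0.05, 4.56, 0.005)
print(lo, mid, hi)
```

Students can read off directly that the sum can vary by about plus or minus 0.055, i.e. the larger uncertainty dominates, which is what the addition/subtraction rule formalizes.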
All too often math is taught with the fiction that all numbers are exact, and in real life this is practically never true. Even when counting things, there is often an uncertainty. An example here is the frog problem: A biologist catches 35 frogs from a pond, bands them, and puts them back. A few days later he catches 35 more frogs and finds that 9 have bands. How many frogs total are there in the pond? The answer can only be found by proportional reasoning, and there is an uncertainty on the number 9, so the final answer has an uncertainty. There is no uncertainty on the numbers caught. The uncertainty on 9 is around 3, so the answer could be off by about 1/3 either way.
OK, it is too much to expect an elementary student to know the uncertainty, but they should begin to understand that there is an uncertainty in this number, so if they give the answer as 136.1 frogs it is clearly wrong. According to the practical rules of sig figs they should give 140, but 136 is much better than 136.1. Actually 9 has almost 2 significant figures, as it results in almost the same relative uncertainty as 10. One would never give the nonsensical answer 136.111. So rounding everything to the nearest thousandth is a very stupid rule.
Incidentally, the frog problem is solvable by only about 25% of high school seniors without being told how to do it, because it requires proportional reasoning ability. Proportional reasoning requires students to recognize when things are proportional without the cue of the problem appearing in a set of proportion exercises, and then to correctly set up the proportion or fraction. If you just tell them how to do it, the idea will not transfer. But if they struggle with it and explain it to each other, they do improve.
John M. Clement Houston, TX
> Speaking of rounding.............
>
> I know a math teacher who gave an answer with the instructions stating
> that every answer must be rounded to the nearest thousandth.
>
> Don't such rounding instructions carry an implicit "as needed?"
>
> Case in point: One of my tutoring students came up with the answer of
> "5" on a test question, which was correct. It was as if he divided 2
> into 10.
>
> However, his teacher took off a point because he did not express his
> answer as 5.000.