In NHST practice we invariably set the confidence level at 95% or higher (even 99%) in order to guard against a Type I error, that is, unduly rejecting the null hypothesis, H0; with such settings the probability of that error cannot exceed 5%. This protects us from being duped by differences too small to be distinguished from the natural randomness of the data, and thus from a worthless false discovery. What is the consequence? As we raise the confidence level (lowering the significance level) the test becomes ever more conservative, yet the true value of the parameter remains completely unknown; therefore, being unable to gather sufficient evidence to reject the null is never synonymous with its truth.
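The point that failing to reject H0 says nothing about its truth can be sketched numerically. Below is a hypothetical two-sample t test (the data and the critical value are illustrative assumptions, not from the text): the two samples plainly differ, yet with only five observations per group the test cannot reject H0 at the 5% level.

```python
import statistics

# Hypothetical data: sample b is sample a shifted up by exactly 1.
a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [2.0, 3.0, 4.0, 5.0, 6.0]

n = len(a)
diff = statistics.fmean(b) - statistics.fmean(a)                      # 1.0
se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5   # 1.0
t = diff / se                                                         # 1.0

# Two-sided 5% critical value of the t distribution with 8 df.
T_CRIT = 2.306
print(f"t = {t:.2f}, reject H0 at alpha = 0.05: {abs(t) > T_CRIT}")
# -> t = 1.00, reject H0 at alpha = 0.05: False
```

The test fails to reject even though every value in the second sample exceeds its counterpart by one unit: the sample is simply too small to detect the difference, which is precisely why non-rejection must not be read as evidence for H0.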
Even when a significant test value does occur, we must be wise enough to go further and estimate the size of the difference between the data and the null: a simple matter, covered by the theory, but seldom carried out, it seems, in some branches of science (psychology, biology, ecology, forestry, economics). This fact has led some researchers to blame journals for not demanding such a complementary analysis. Of course, whether the difference is worthwhile in practical terms is another question, and one that cannot be answered by statistics. All consequences of the change must be carefully and specifically studied, for example drug side effects, machinery costs, worker training for improved factory production, and so on.
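The complementary step recommended above, reporting an estimate of the difference rather than only a yes/no verdict, can be sketched as follows (the data and the critical value are hypothetical, chosen for illustration):

```python
import statistics

# Hypothetical measurements from two groups of six observations each.
a = [4.1, 5.0, 6.2, 5.5, 4.8, 5.9]
b = [5.6, 6.4, 7.1, 6.0, 6.8, 7.3]

n = len(a)
diff = statistics.fmean(b) - statistics.fmean(a)
se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5

# Two-sided 5% critical value of the t distribution with 10 df.
T_CRIT = 2.228
lo, hi = diff - T_CRIT * se, diff + T_CRIT * se
print(f"estimated difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The interval tells the reader not merely that the difference is significant but roughly how large it is, which is the quantity a practical decision (about a drug, a machine, a training programme) actually depends on; whether that size matters in practice is, as the text says, beyond what statistics can answer.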