Probability theory is built deductively, starting only from first principles such as the definition of the probability of an event, while statistics develops largely through inductive reasoning, a rather different route; it is not surprising that the two sometimes collide. It is well known that Bayesians and Frequentists seldom agree on how to tackle actual statistical problems, and that the latter, I would say, rely almost always on induction.

This post was triggered by a common statement, a reminder or cautionary note addressed to Frequentist people. Quoting:

"In What Sense 'Confident'?
-- The confidence level 1 - alpha is not the probability that theta is contained in the confidence interval (theta is assumed fixed, not a random variable).
-- Repeated sampling experiments: on average, 100(1 - alpha)% of the confidence intervals obtained should include the actual value of theta."

This leads to a similar question: if a real significance test gives me a p-value less than alpha, what can I conclude? Critics answer immediately: nothing. Alpha is merely a preset constant, say 0.05, and the p-value is the probability, assuming the null hypothesis is true, of obtaining a result at least as extreme as the one observed.

But viewed inductively the matter looks different. Suppose, for a moment, that H0 is true. Then the p-values are uniformly distributed on [0, 1]. Since my test produced a p-value < alpha, such a result is very unlikely unless H0 is false. This is a human way of reasoning, let me say. Could the null nevertheless be true? Of course, but then it was simply bad luck. So, generally speaking, the condition p-value < alpha is a strong symptom, I mean, that H0 should be rejected.
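Both claims above, the repeated-sampling reading of a confidence level and the uniformity of p-values under a true null, are easy to check by simulation. Below is a minimal sketch in pure Python: the true mean theta, the known sigma, the sample size, and the number of repetitions are all hypothetical values chosen just for the demonstration. It repeats a z-test experiment many times, counting how often the 95% interval covers theta and how often the p-value falls below alpha when H0 is in fact true.

```python
import math
import random

random.seed(42)

Z_975 = 1.959963984540054  # 97.5th percentile of the standard normal

def z_pvalue(z):
    """Two-sided p-value for a z statistic under a standard normal null."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

theta = 5.0    # true (fixed) mean -- hypothetical value for the demo
sigma = 2.0    # known standard deviation
n = 30         # observations per experiment
reps = 20000   # number of repeated sampling experiments
alpha = 0.05

covered = 0
pvals = []
for _ in range(reps):
    sample = [random.gauss(theta, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    se = sigma / math.sqrt(n)
    # Does the 95% confidence interval contain the fixed theta?
    if xbar - Z_975 * se <= theta <= xbar + Z_975 * se:
        covered += 1
    # p-value for testing the TRUE null H0: mu = theta
    z = (xbar - theta) / se
    pvals.append(z_pvalue(z))

coverage = covered / reps
below_alpha = sum(p < alpha for p in pvals) / reps
print(f"coverage of 95% CIs:        {coverage:.3f}")
print(f"p-values < alpha under H0:  {below_alpha:.3f}")
```

The coverage comes out close to 0.95 and the fraction of p-values below alpha close to 0.05, exactly as the quoted statement and the uniformity argument predict: when H0 is true, a small p-value is a rare event, which is precisely why observing one casts doubt on H0.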