In classical Null Hypothesis Significance Testing the ONLY result/conclusion we can reach, given the data and a confidence level, is: ___we obtain sufficient evidence that the null is unlikely, or ___we do not.
Any other, more *specific* conclusion sometimes found in textbooks/papers is unjustified/wrong.
The difference of two means
The true difference between population means, D = muX - muY, is ACCURATE relative to the observed difference, D´ = xbar - ybar, if | D - D´ | is small. On the other hand, high PRECISION demands that the confidence interval CI = [u, v], built around D´ so as to cover D with a given confidence level (say 95%), be short.
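As a minimal sketch of the quantities above, the following computes D´ and a 95% CI for the difference of two means using the normal approximation; the sample values and sizes are illustrative assumptions, not data from the text:

```python
import math
from statistics import mean, stdev

# Hypothetical samples (illustrative only)
x = [10.1, 9.8, 10.4, 10.0, 9.7, 10.3]
y = [9.2, 9.5, 9.0, 9.6, 9.3, 9.1]

d_obs = mean(x) - mean(y)                              # D' = xbar - ybar
se = math.sqrt(stdev(x)**2/len(x) + stdev(y)**2/len(y))
z = 1.96                                               # ~95% normal quantile
u, v = d_obs - z*se, d_obs + z*se                      # CI = [u, v]
print(f"D' = {d_obs:.3f}, CI = [{u:.3f}, {v:.3f}], width = {v-u:.3f}")
```

The width v - u is what PRECISION refers to; nothing in the construction guarantees it will be small.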
ACCURACY and PRECISION have rather different features and ways of being controlled: ____ACCURACY: we can fix the probability of finding D inside the confidence interval, u <= D <= v, say a 95% confidence level. In contrast, PRECISION - how wide or narrow the interval is - is left to chance: we cannot anticipate it a priori because it depends directly on the dispersion of the data. It can even happen that a highly precise interval is useless because the CI does not contain D . . . a type I error (see figure above).
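A short simulation makes both points concrete: the coverage probability is fixed by construction (about 95%), while the interval width varies from sample to sample, and in roughly 5% of runs the CI misses the true D entirely. All parameters here (muX = 5, muY = 3, sigma = 2, n = 30) are assumptions chosen for illustration:

```python
import random, math
from statistics import mean, stdev

random.seed(1)
# Assumed parameters for this sketch: true means 5 and 3, so D = 2
muX, muY, sigma, n, D, z = 5.0, 3.0, 2.0, 30, 2.0, 1.96

misses, widths = 0, []
for _ in range(2000):
    x = [random.gauss(muX, sigma) for _ in range(n)]
    y = [random.gauss(muY, sigma) for _ in range(n)]
    d = mean(x) - mean(y)                        # observed D'
    se = math.sqrt(stdev(x)**2/n + stdev(y)**2/n)
    u, v = d - z*se, d + z*se                    # ~95% CI around D'
    widths.append(v - u)
    misses += not (u <= D <= v)                  # true D outside the CI

print(f"miss rate ~ {misses/2000:.3f}")          # close to the nominal 0.05
print(f"width varies: [{min(widths):.2f}, {max(widths):.2f}]")
```

The miss rate we control (ACCURACY); the width range the data decide (PRECISION).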