For typical hypothesis testing, we want a small significance level, which means a small rejection region. Thus, any value of the statistic that falls in the rejection region is unlikely to be due to chance (in combination with the truth of H0). In testing a drug for a medical effect, this makes sense because we usually want to demonstrate an effect, and H0 is typically the absence of an effect. For values of the statistic that fall in the small rejection region, we can say that if H0 is true, it is highly unlikely for us to get this value of the statistic. The smaller the significance level, the smaller the rejection region, and the less able we are to attribute values in the rejection region to chance.
For normality, we often want the opposite. We want H0, which is normality of the residuals. We cannot accept H0 to any degree of confidence using this setup of hypothesis testing, but at least we can make it very easy to reject H0, so any non-rejection of H0 is seen to be well founded. This implies a large rejection region and a high significance level. In fact, we might want a 95% rejection region, the counterpart of wanting a 5% rejection region when the intent is to demonstrate that rejection of H0 is not due to chance.
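To make the asymmetry concrete, here is a sketch in Python of how the same p-value leads to opposite decisions under a conventional small alpha versus the large alpha I am describing. The normality check itself is just an illustration: a Monte Carlo Jarque-Bera-style statistic (skewness and excess kurtosis of the residuals compared against simulated normal samples), not any particular published test, and the `mc_pvalue` helper is my own made-up name.

```python
import random
import statistics

def skew_kurt_stat(xs):
    """Jarque-Bera-style statistic: large when sample skewness or
    excess kurtosis deviates from what a normal sample would give."""
    n = len(xs)
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    z = [(x - m) / s for x in xs]
    skew = sum(v ** 3 for v in z) / n
    kurt = sum(v ** 4 for v in z) / n - 3  # excess kurtosis
    return n / 6 * (skew ** 2 + kurt ** 2 / 4)

def mc_pvalue(xs, sims=2000, seed=0):
    """Approximate p-value for H0: data are normal, by comparing the
    observed statistic to the statistic on simulated normal samples."""
    rng = random.Random(seed)
    obs = skew_kurt_stat(xs)
    n = len(xs)
    hits = sum(
        1 for _ in range(sims)
        if skew_kurt_stat([rng.gauss(0, 1) for _ in range(n)]) >= obs
    )
    return hits / sims

random.seed(1)
residuals = [random.gauss(0, 1) for _ in range(100)]  # residuals that really are normal
p = mc_pvalue(residuals)

# The same p-value, judged against a conventional alpha and a huge one:
for alpha in (0.05, 0.95):
    decision = "reject H0" if p < alpha else "fail to reject H0"
    print(f"alpha={alpha:.2f}: p={p:.3f} -> {decision}")
```

With alpha = 0.05, only extreme values of the statistic reject H0; with alpha = 0.95, nearly everything rejects, so surviving the test at that level is the stronger statement about normality, which is the direction of evidence I am after.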
Is this reasonable? I ask because the table in the above link shows significance levels of 1%, 2.5%, 5%, 10%, and 15%. These small values seem more like the values one would be interested in when wanting to demonstrate valid rejection of H0.