The Jarque-Bera normality test gives significant p-values even when there is skewness and kurtosis. Does that mean the test is inferring that the data distribution is approximately normal?
hypothesis-testing, normality-assumption, r
Related Solutions
p-value = 0.5329
Does it mean that the probability to discard the normality hypothesis
A p-value is not "the probability to discard the hypothesis". You should review the meaning of p-values. The first sentence of the relevant Wikipedia page should help:
the p-value is the probability of obtaining the observed sample results (or a more extreme result) when the null hypothesis is actually true.
(NB: I have modified the above link to the version that was current at the time I wrote the answer, as the opening paragraph of the article has been edited badly and it's presently - June 2018 - effectively wrong.)
It goes on to say:
If this p-value is very small, usually less than or equal to a threshold value previously chosen called the significance level (traditionally 5% or 1%), it suggests that the observed data is inconsistent with the assumption that the null hypothesis is true.
This is quite different from "probability to discard the hypothesis".
is 53.29%?
A p-value around 53% is quite consistent with the null hypothesis.
(However, this does not imply that the distribution that the data were supposedly a random sample from is normal; it would be consistent with an infinite number of non-normal distributions as well.)
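To make that definition concrete, here is a small R sketch (the sample, seed and number of simulations are arbitrary; it uses jarque.bera.test from the tseries package) that approximates a p-value by brute force: generate many samples under the null of normality and count how often the statistic is at least as extreme as the observed one.

```r
# Monte Carlo illustration of what a p-value is: the probability, under the
# null hypothesis, of a test statistic at least as extreme as the observed one.
library(tseries)                        # jarque.bera.test()

set.seed(1)
x   <- rnorm(50)                        # illustrative sample (here genuinely normal)
obs <- jarque.bera.test(x)$statistic    # observed Jarque-Bera statistic

n    <- length(x)
nsim <- 10000
null_stats <- replicate(nsim, jarque.bera.test(rnorm(n))$statistic)

mean(null_stats >= obs)                 # simulated p-value
jarque.bera.test(x)$p.value             # asymptotic chi-squared p-value, for comparison
```

Read this way, a p-value around 0.53 simply says the observed statistic is unremarkable under the null; it is not the probability that the null is true (or false).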
Firstly note that failure to reject a null doesn't mean the null is true -- so just because a goodness of fit test came out perfectly consistent with a normal, that doesn't imply that the data were drawn from a normal distribution.
However, in the case of this particular test the connection is even more tenuous -- even if the population skewness and kurtosis were the same as for a normal, it still doesn't imply normality. There's an infinity of distributions that have skewness 0 and kurtosis 3. A number of examples can be found on site here with a bit of searching.
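For one concrete construction (my own illustration, not from the original answer): mix a Uniform$(-\sqrt 3, \sqrt 3)$ component (kurtosis 1.8) with a Laplace component scaled to variance 1 (kurtosis 6), with weight 5/7 on the uniform. Both pieces have mean 0 and variance 1, so the mixture has skewness 0 and kurtosis $\tfrac57 \cdot 1.8 + \tfrac27 \cdot 6 = 3$ exactly, yet it is plainly not normal:

```r
library(moments)   # skewness(), kurtosis()

# 5/7 Uniform(-sqrt(3), sqrt(3)) + 2/7 Laplace(0, 1/sqrt(2)):
# each component has mean 0 and variance 1, so the mixture's skewness is 0
# and its kurtosis is 5/7 * 1.8 + 2/7 * 6 = 3 -- the same values as the
# normal -- but the distribution is not normal.
rmix <- function(n) {
  lap <- sample(c(-1, 1), n, replace = TRUE) * rexp(n, rate = sqrt(2))
  uni <- runif(n, -sqrt(3), sqrt(3))
  ifelse(runif(n) < 5/7, uni, lap)
}

set.seed(42)
y <- rmix(1e5)
c(skewness = skewness(y), kurtosis = kurtosis(y))  # close to 0 and 3
qqnorm(y); qqline(y)   # tails clearly non-normal despite "normal" skewness/kurtosis
```

Matching those two moments pins down nothing else about the shape of the distribution, which is the point being made here.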
The closer the Jarque-Bera test statistic* is to zero, the closer the sample skewness and kurtosis are to the values 0 and 3. Rejection indicates inconsistency with those values (and hence with normality), failure to reject doesn't imply normality.
You could perhaps interpret it as a weighted sum of squared deviations of the standardized cumulants (the skewness and excess kurtosis) from their expected values under normality. (At least, that would be the interpretation in very large samples - the weights come from the asymptotic variances - but the approach of the joint distribution of skewness and kurtosis to its asymptotic form is quite slow; it shows clear signs of dependence even for samples of size 300, for example.)
*(That name for the test is popular among econometricians, but it is a misnomer, since the statistic was proposed, used and discussed decades before they came along.)
Best Answer
You may have misunderstood something about hypothesis testing or maybe about goodness-of-fit tests, or perhaps specifically about the "Jarque-Bera" test*.
Note that you reject when the p-value is small, which happens when the skewness and kurtosis differ from their expected values under normality.
The test statistic is of the form (from page 1 of Bowman and Shenton's paper):
$$\frac{n}{6} S^2 + \frac{n}{24} (K-3)^2\,,$$
where $S$ is the sample skewness and $K$ is the sample kurtosis (i.e. $K-3$ is the 'excess kurtosis').
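For concreteness, here is a small R sketch (the sample and its size are arbitrary) that computes the statistic directly from the sample moments and compares it with `tseries::jarque.bera.test`, which refers the statistic to its asymptotic chi-squared(2) distribution for the p-value:

```r
library(tseries)   # jarque.bera.test()

set.seed(123)
x <- rnorm(200)                      # arbitrary illustrative sample
n <- length(x)

d  <- x - mean(x)
m2 <- mean(d^2)                      # central moments
S  <- mean(d^3) / m2^(3/2)           # sample skewness
K  <- mean(d^4) / m2^2               # sample kurtosis (K - 3 = excess kurtosis)

JB <- n/6 * S^2 + n/24 * (K - 3)^2   # the statistic above
c(statistic = JB,
  p.value   = pchisq(JB, df = 2, lower.tail = FALSE))  # asymptotic chi-squared(2) p-value

jarque.bera.test(x)                  # should agree with the hand computation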
The null hypothesis is of normality, and rejection of the hypothesis (because of a significant p-value) leads to the conclusion that the distribution from which the data came is non-normal.
The test is specifically looking for skewness and kurtosis different from those of the normal (it squares the standardized deviations and sums them), and will tend to be significant when the sample skewness and kurtosis deviate from the values expected under normality.
Which is to say - when you get a significant test statistic with this test, it's explicitly because the sample skewness or kurtosis (or both) are different from what you expect to see in a sample from a normal distribution.
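As a quick illustration (a simulation of my own, with an arbitrary seed and sample size): an exponential sample is strongly right-skewed and heavy-tailed relative to the normal, so its sample skewness and kurtosis land far from 0 and 3, and the test rejects for exactly that reason.

```r
library(tseries)   # jarque.bera.test()
library(moments)   # skewness(), kurtosis()

set.seed(7)
z <- rexp(200)                                     # population skewness 2, kurtosis 9
c(skewness = skewness(z), kurtosis = kurtosis(z))  # far from 0 and 3
jarque.bera.test(z)                                # small p-value, driven by those deviations
```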
Take care, however -- the asymptotic approximation on which the test is based comes in only very slowly (see the image near the bottom of this answer; also see here and here for some additional points). I wouldn't rely on it without simulating the distribution of the test statistic unless $n$ is a good deal larger than say 100.
Here's an example of the joint distribution of sample skewness and kurtosis in normal samples at n = 30 (simulated values):

[scatterplot of sample kurtosis against sample skewness for simulated normal samples of size 30]

-- as you see, it is not at all close to bivariate normal.
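If you want to see this for yourself, here is a minimal R sketch (the simulation settings are illustrative, not those used for the original figure): it simulates normal samples of size 30, plots the sample skewness against the sample kurtosis, and also estimates the actual rejection rate when the asymptotic chi-squared(2) 5% critical value is used.

```r
library(moments)   # skewness(), kurtosis()

set.seed(1)
n    <- 30
nsim <- 20000

sims <- replicate(nsim, {
  x <- rnorm(n)
  c(S = skewness(x), K = kurtosis(x))
})
S <- sims["S", ]; K <- sims["K", ]

# Joint distribution of (skewness, kurtosis) in normal samples of size 30 --
# far from the independent bivariate normal the asymptotics promise.
plot(S, K, pch = ".", xlab = "sample skewness", ylab = "sample kurtosis")

# Rejection rate using the asymptotic chi-squared(2) 5% critical value:
JB <- n/6 * S^2 + n/24 * (K - 3)^2
mean(JB > qchisq(0.95, df = 2))   # compare with the nominal 0.05
```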
*(The development of the test precedes their 1980 paper; it shouldn't be named for them. D'Agostino & Pearson (1973) and then Bowman & Shenton (1975), for example, were there well before, and the latter discussed the relevant issues in more detail (including the slow convergence and the shape of the joint distribution in small samples - though their diagrams seem as if they may contain an error); and one can readily see that the idea of basing a goodness-of-fit test on skewness and kurtosis together comes even earlier than those papers.)