>>Return to Tell Me About Statistics!
Because “hypothesis testing” is based on a sample, it can lead to mistaken conclusions. These mistakes fall into two types, depending on the kind of error made.
Continuing from the previous article on the “Type I error,” this time we explain the “Type II error.”
Type II error
Suppose the null hypothesis is that a new drug and an existing drug are equally effective, and the alternative hypothesis is that the new drug is more effective than the existing drug.
If, for example, p = 0.06, the null hypothesis cannot be rejected, so the claim that the new drug is more effective than the existing drug cannot be supported.
The problem is that even when the new and existing drugs actually differ in effectiveness (that is, the null hypothesis is false), this can be overlooked and the new drug judged “ineffective.” This kind of mistake is called a Type II error (also known as a β error), and the probability of committing one is denoted β, as shown in the figure.
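A Type II error can be illustrated by simulation. The sketch below assumes hypothetical numbers (a true mean effect of 55 versus a null value of 50, known σ = 10, n = 30) and uses a one-sided z-test; it counts how often a truly effective drug still fails to reach significance.

```python
import math
import random

def one_sided_z_pvalue(sample, mu0, sigma):
    """One-sided p-value for H1: mean > mu0, with known sigma (z-test)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Upper-tail probability of the standard normal, via the error function
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

random.seed(0)
ALPHA = 0.05
N, SIGMA = 30, 10.0
MU0, MU_TRUE = 50.0, 55.0   # hypothetical: the new drug really is better (H0 is false)

TRIALS = 2000
misses = 0
for _ in range(TRIALS):
    sample = [random.gauss(MU_TRUE, SIGMA) for _ in range(N)]
    if one_sided_z_pvalue(sample, MU0, SIGMA) >= ALPHA:
        misses += 1          # failed to reject a false H0: a Type II error

beta_hat = misses / TRIALS
print(f"estimated beta: {beta_hat:.2f}")
```

With these assumed numbers, a meaningful fraction of trials fail to detect the real effect, which is exactly the β in the figure.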
[Figure] Relationship between α error and β error
The significance level (α) is the threshold below which a p-value is judged significant. α is usually set to 0.05 (5%), meaning that an error rate of about 5% is considered acceptable.
A p-value smaller than α is considered significant. However, α is also the probability of mistakenly rejecting the null hypothesis when it is true. Decreasing α reduces Type I errors but increases Type II errors; conversely, increasing α increases Type I errors but reduces Type II errors.
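The tradeoff can be computed directly for a one-sided z-test. The sketch below assumes a hypothetical fixed effect (a 5-point improvement, σ = 10, n = 30) and uses the standard one-sided critical values; it shows β rising as α is made smaller.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Standard one-sided critical values z_(1-alpha)
Z_CRIT = {0.01: 2.3263, 0.05: 1.6449, 0.10: 1.2816}

def beta_one_sided_z(alpha, effect, sigma, n):
    """Type II error rate of a one-sided z-test at significance level alpha."""
    shift = effect * math.sqrt(n) / sigma   # standardized true effect
    return norm_cdf(Z_CRIT[alpha] - shift)

# Hypothetical fixed effect and sample size: smaller alpha -> larger beta
for alpha in (0.01, 0.05, 0.10):
    b = beta_one_sided_z(alpha, 5.0, 10.0, 30)
    print(f"alpha = {alpha:.2f}  ->  beta = {b:.3f}")
```

Tightening α from 0.10 to 0.01 makes the test harder to pass, so more genuinely effective drugs slip through undetected.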
The only way to reduce both errors at the same time is to collect a larger sample. The larger the sample size, the smaller β becomes, and the greater the statistical power (1 − β).
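The effect of sample size can be made concrete with the same one-sided z-test setup. Assuming the same hypothetical effect (5 points, σ = 10) and α = 0.05, the sketch below shows power (1 − β) climbing as n grows.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

Z_095 = 1.6449   # one-sided critical value at alpha = 0.05

def power_one_sided_z(effect, sigma, n, z_crit=Z_095):
    """Power (1 - beta) of a one-sided z-test against a true effect."""
    return 1 - norm_cdf(z_crit - effect * math.sqrt(n) / sigma)

# Hypothetical effect = 5, sigma = 10: larger n shrinks beta, raises power
for n in (10, 30, 100):
    p = power_one_sided_z(5.0, 10.0, n)
    print(f"n = {n:3d}  power = {p:.3f}  beta = {1 - p:.3f}")
```

With these assumed numbers, power is modest at n = 10 but nearly certain detection by n = 100.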
We will discuss power in the next section.