Types of Statistical Errors:

There are three types of statistical errors:

1-    Type I error (symbolized by α and equivalent to a false-positive result) occurs when one incorrectly rejects a null hypothesis that is actually true (i.e., there is no difference in the larger population). For example, when examining the effectiveness of an experimental antibiotic, the null hypothesis would be that the drug has no effect on a disease in the larger population. If one falsely rejects the null hypothesis, one would claim that the drug has a significant effect on the disease as measured by one's study sample, when in reality the antibiotic is not effective against the disease in the larger population. The probability of committing a Type I error is a function of one's level of statistical significance. The conventional range for significance is between 0.01 and 0.10, with 0.05 representing the value seen in most published research studies. Assuming one has obtained an adequately sized and representative sample from the larger population, a Type I error generally occurs due to random chance. Multiple testing may also increase the chance of a Type I error, because making many different comparisons between groups often results in at least one comparison being falsely “significant.” (Parampreet Kaur, Jill Stoltzfus, 2017)
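To make the multiple-testing point concrete, here is a minimal simulation sketch in Python (not from the cited source; the group size of 30 and the 20 comparisons are illustrative assumptions). With every null hypothesis true, the chance of at least one false positive across 20 independent tests at α = 0.05 is roughly 1 - (1 - 0.05)^20 ≈ 0.64:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05       # significance level for each individual comparison
n_studies = 2_000  # simulated studies
n_tests = 20       # comparisons per study, every null hypothesis true

studies_with_false_positive = 0
for _ in range(n_studies):
    # Both groups are drawn from the SAME population, so any
    # "significant" result is a Type I error by construction.
    p_values = [
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(n_tests)
    ]
    if min(p_values) < alpha:
        studies_with_false_positive += 1

# Expect roughly 1 - (1 - 0.05)**20 ≈ 0.64, far above the nominal 0.05.
print(studies_with_false_positive / n_studies)
```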

Causes of Type I error:

·       A Type I error is caused when something other than the variable under study affects the other variable, producing an outcome that supports rejection of the null hypothesis.

·       Under such conditions, the outcome appears to have happened due to some cause other than chance, when in fact it is caused by chance.

·       Before a hypothesis is tested, a probability is set as the level of significance, which means the hypothesis is tested while accepting a known chance that the null hypothesis will be rejected even when it is true.

·       Thus, a Type I error might be due to chance, or to a level of significance set before the test without considering the test duration and sample size; the sketch below illustrates how this pre-set level fixes the false-positive rate. (Sapkota, 2020)
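As a quick check on that last point, a minimal simulation sketch (again Python; the sample size of 40 per group is an illustrative assumption) shows that when the null hypothesis is true by construction, the long-run rejection rate simply equals the pre-set α:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05     # level of significance fixed before any data are seen
n_sims = 20_000

rejections = 0
for _ in range(n_sims):
    # The null hypothesis is TRUE: both samples share one population.
    a = rng.normal(loc=0.0, scale=1.0, size=40)
    b = rng.normal(loc=0.0, scale=1.0, size=40)
    if stats.ttest_ind(a, b).pvalue < alpha:
        rejections += 1  # each rejection here is a Type I error

print(rejections / n_sims)  # long-run rate ≈ 0.05, i.e., ≈ alpha
```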

2-    Type II error (symbolized by β and equivalent to a false-negative result) occurs when one fails to reject a null hypothesis that is actually false. For example, if the experimental antibiotic truly affects a disease in the larger population, but one falsely claims that it does not, as measured by the study sample, a Type II error is the result. The probability of committing a Type II error is β, and it is directly tied to statistical power (symbolized by 1 - β). The conventional range for Type II error is between 0.05 and 0.20, with 0.20 representing the standard value in published studies (meaning there is an 80% chance of correctly detecting a difference in one's sample that actually exists in the larger population). The main reason for Type II error is an insufficient sample size for detecting an effect size of interest. For example, one may wish to test whether a drug reduces disease incidence in the treatment group by 10% compared to the control group. Here, the effect size would be 10%, and one's sample must be large enough to detect this difference to avoid a Type II error. Smaller effect sizes require larger samples, so one must exercise great care in identifying the appropriate effect size for one's study objectives (e.g., from previous research, pilot study findings, and/or one's clinical observations). (Parampreet Kaur, Jill Stoltzfus, 2017)
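The link between sample size, effect size, and Type II error can be seen in a short simulation. The sketch below (Python; the standardized effect size of 0.5 and α = 0.05 are illustrative assumptions, not values from the source) estimates power as the fraction of simulated studies that correctly reject a false null hypothesis:

```python
import numpy as np
from scipy import stats

def estimated_power(effect_size, n_per_group, alpha=0.05,
                    n_sims=5_000, seed=1):
    """Fraction of simulated studies that correctly reject a false null."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        # The alternative is true: the treated mean really differs.
        treated = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / n_sims

# Power (1 - beta) grows with sample size for a fixed effect size.
for n in (20, 64, 200):
    print(n, estimated_power(effect_size=0.5, n_per_group=n))
# Around n = 64 per group, power reaches ≈ 0.80 (beta ≈ 0.20)
# for a standardized effect of 0.5.
```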

Causes of Type II error:

·       The primary cause of a Type II error is the low power of the statistical test.

·       This occurs when the statistical test is not powerful enough to detect a true effect, which results in a Type II error.

·       Other factors, like the sample size, might also affect the results of the test. (Sapkota, 2020)

·       When a small sample size is selected, the relationship between the two variables being tested might not appear significant even when it does exist.

·       The researcher might assume the relationship is due to chance and thus reject the alternative hypothesis even when it is true.

·       Therefore, it is important to select an appropriate sample size before beginning the test, as shown in the sketch below.
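One common way to choose that sample size in advance is a formal power analysis. The sketch below uses the statsmodels library for this (one option among many; the effect size is again an illustrative assumption):

```python
# Requires the statsmodels package (pip install statsmodels).
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size needed to detect a standardized
# effect of 0.5 with alpha = 0.05 and power = 0.80 (beta = 0.20).
n_required = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.80,
    ratio=1.0, alternative='two-sided',
)
print(round(n_required))  # ≈ 64 participants per group
```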

The relation between Type I and Type II errors:

Although they represent different concepts, Type I and Type II errors are related in that reducing Type I error tends to increase Type II error, and vice versa. By reducing Type I error (typically by decreasing the level of significance, such as from 0.05 to 0.01), it becomes more difficult to reject the null hypothesis of “no difference” even if there really is a difference in the larger population (which would result in a Type II error). In contrast, by increasing the Type I error rate or level of significance (such as from 0.05 to 0.10), one makes it easier to reject the null hypothesis of “no difference” and conclude that there truly is a difference in the larger population, which reduces the probability of a Type II error. Figure 1 of the cited article illustrates this relationship by showing how increasing or decreasing alpha (Type I error) or beta (Type II error) leads to a respective decrease or increase in the other value. (Parampreet Kaur, Jill Stoltzfus, 2017)
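This tradeoff can also be computed directly. The sketch below (Python, using a simple normal approximation to a two-sample test; the effect size and sample size are illustrative assumptions) shows β rising as α is tightened, with the design held fixed:

```python
import numpy as np
from scipy import stats

def approx_power(effect_size, n_per_group, alpha):
    """Normal approximation to two-sided, two-sample test power."""
    z_crit = stats.norm.ppf(1 - alpha / 2)          # rejection threshold
    shift = effect_size * np.sqrt(n_per_group / 2)  # mean of test statistic
    return 1 - stats.norm.cdf(z_crit - shift)

# Fixed design: standardized effect 0.5, 50 participants per group.
for alpha in (0.01, 0.05, 0.10):
    beta = 1 - approx_power(effect_size=0.5, n_per_group=50, alpha=alpha)
    print(f"alpha = {alpha:.2f}  ->  beta = {beta:.2f}")
# Tightening alpha from 0.10 to 0.01 more than doubles beta here:
# fewer false positives are bought at the cost of more false negatives.
```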

3-    Type III error occurs when one correctly rejects the null hypothesis of no difference but does so for the wrong reason. One may also provide the right answer to the wrong question. In this case, the hypothesis may be poorly written or incorrect altogether. For example, a drug may reduce disease in the larger population, but it fails to do so in one's study sample because the hypothesis was not well conceived. To avoid Type III error, one should take great care when collecting, recording, and analyzing data from the population of interest, since this type of error may negatively impact medical practices and health policies if one adopts an inappropriate treatment plan or course of intervention due to faulty data. (Parampreet Kaur, Jill Stoltzfus, 2017)
