15 Statistical Bias Examples

Reviewed by Chris Drew (PhD)

This article was peer-reviewed and edited by Chris Drew (PhD). The review process on Helpful Professor involves having a PhD level expert fact check, edit, and contribute to articles. Reviewers ensure all content reflects expert academic consensus and is backed up with reference to academic studies. Dr. Drew has published over 20 academic articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education and holds a PhD in Education from ACU.


Statistical bias refers to a systematic error that causes a sample to misrepresent the population it is drawn from. When bias is present, estimates calculated from the sample differ systematically from the true values in the target population under study. There are numerous types of statistical bias.

When relying on a sample to make estimates regarding the population, there are numerous issues that can cause the sample to be flawed.

Examples of statistical biases include sampling, response, non-response, self-selection, and measurement biases.

Statistical Bias Examples

  • A university researcher studies the students in his class and then makes inferences regarding human behavior, failing to account for the fact students don’t necessarily represent the general population. – Sampling Bias
  • Respondents to an environmental habits survey give answers that will make them look favorably rather than providing honest answers. – Response Bias
  • Over 80% of registered voters refused to participate in a political survey, causing the survey to not truly reflect voters’ opinions. – Non-response Bias  
  • A product designer solicits customers to test a new app interface, but only middle-income office workers volunteer, meaning the interface hasn’t been tested with other types of users. – Self-selection Bias
  • A sports medicine researcher forgot to calibrate their scale so it consistently underestimates the weight of the study’s participants by 0.3 lbs. – Measurement Bias  
  • A criminologist studies the causal factors of crime in one ethnic group in one neighborhood of one large city and generalizes the findings to other populations, not recognizing that different neighborhoods and ethnic groups may generate different results. – Sampling Bias
  • A survey on a controversial social issue contains a lot of leading questions that suggest the desired responses, leading to skewed responses. – Response Bias
  • Most people refuse to participate in a survey regarding a controversial social issue, meaning you can’t get a true population sample. – Non-response Bias
  • A field study on fast food in a shopping mall attracts mostly overweight individuals, whose eating habits differ from those of the general population that frequents the mall. – Self-selection Bias
  • The observers rating the quality of interactions between parent and child are not well-trained and make a lot of faulty evaluations. – Measurement Bias  

Some Examples in Detail

1. Self-Selection Bias

When soliciting volunteers for a study, the researcher has to be very concerned about self-selection. The people who want to participate may be vastly different from the target population under study.

People have a tendency to participate in surveys about issues they feel strongly against. The survey is an opportunity to express a negative opinion about something they find upsetting or hold a negative attitude toward.

This can involve controversial issues such as taxes or social programs, or even relatively mundane topics like customer satisfaction.

For example, some restaurants will place small surveys on their tables. Unfortunately for the staff, unhappy customers are a lot more likely to fill out those surveys than satisfied customers.

This will skew the results and make it seem that there is a much higher percentage of unsatisfied customers than is actually the case.
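To make the effect concrete, here is a minimal simulation sketch in Python with NumPy (not part of the original article): the satisfaction rate and the response probabilities are made-up, illustrative assumptions, chosen only to show how self-selection can drag the survey estimate far away from the true value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population of 10,000 diners: 85% satisfied, 15% unsatisfied.
satisfied = rng.random(10_000) < 0.85

# Assumed (illustrative) response probabilities: unhappy customers are far
# more likely to fill out the table survey than happy ones.
p_respond = np.where(satisfied, 0.02, 0.20)
responded = rng.random(10_000) < p_respond

print(f"True satisfaction rate: {satisfied.mean():.1%}")
print(f"Survey-based estimate:  {satisfied[responded].mean():.1%}")
# The survey badly underestimates satisfaction because respondents
# self-selected on the very attribute being measured.
```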

2. Sampling Bias

Sampling bias is usually the result of a faulty sampling procedure. Ideally, researchers should select participants for their study completely at random. This means that every person in the population has an equal chance of being selected.

Complete random selection is sometimes just not possible, usually for very practical reasons. For example, if a political polling firm wants to know what “the public” thinks regarding a specific issue, the sample should be quite large.

However, even large-scale surveys may have only a thousand respondents. When the population of a country is over 300 million, one has to wonder whether such a sample is truly representative.

If those thousand people were not selected truly at random, there is a good chance the sample contains a disproportionate number of people from particular demographic groups. In that case, the sample is biased.
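The short Python sketch below illustrates the point under invented assumptions: a hypothetical opinion that differs across two demographic groups, a simple random sample, and a convenience sample that over-represents one group. The proportions are illustrative, not real polling data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: a binary opinion that differs by age group.
n = 100_000
is_young = rng.random(n) < 0.30            # 30% of the population is "young"
supports = np.where(is_young,
                    rng.random(n) < 0.70,  # 70% support among young people
                    rng.random(n) < 0.40)  # 40% support among everyone else

# Simple random sample of 1,000 people.
random_idx = rng.choice(n, size=1_000, replace=False)

# Convenience sample of 1,000 that over-represents young people (e.g. an
# online poll): each young person is 5x as likely to end up in the sample.
weights = np.where(is_young, 5.0, 1.0)
convenience_idx = rng.choice(n, size=1_000, replace=False,
                             p=weights / weights.sum())

print(f"True support:                {supports.mean():.1%}")
print(f"Random-sample estimate:      {supports[random_idx].mean():.1%}")
print(f"Convenience-sample estimate: {supports[convenience_idx].mean():.1%}")
```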

3. Non-response Bias

This type of bias occurs when people invited to take part in a study do not respond or refuse to participate. If the results are meant to be generalized to a specific population, but a high percentage of people decline and those who decline differ systematically from those who take part, then we have non-response bias.

As Berg (2005, p. 865) explained:

“When individuals from a special subset of the population are systematically omitted from a particular sample, however, the sample cannot be said to be random in the sense that every member of the population is equally likely to be included in the sample.”

This type of bias is a common issue with phone surveys. Calling 1,000 people may produce a response rate under 50%. That leaves out more than half of the people the sample was meant to capture, and if those who answered differ from those who did not, the results may be quite skewed.
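As a rough illustration, the Python sketch below assumes (purely for demonstration) that people who approve of a policy are less likely to pick up the phone than people who disapprove. The numbers are made up, but they show how a sub-50% response rate can skew an estimate when non-responders differ systematically from responders.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical pool of 1,000 phone numbers called for a political survey.
n_called = 1_000
approves = rng.random(n_called) < 0.50    # true approval: about 50%

# Assumed (illustrative) non-response pattern: people who approve are less
# likely to pick up / agree to participate than people who disapprove.
p_respond = np.where(approves, 0.30, 0.55)
responded = rng.random(n_called) < p_respond

print(f"Response rate:   {responded.mean():.0%}")            # well under 50%
print(f"True approval:   {approves.mean():.0%}")
print(f"Survey estimate: {approves[responded].mean():.0%}")   # skewed downward
```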

It also leads to another kind of statistical bias called self-selection bias. Yes, a single study can contain multiple types of biases.

Longitudinal studies also have trouble with non-response bias due to drop-out rates. Trying to track participants for years, maybe even decades, can be quite problematic. People move, lose interest, or sometimes pass away. These all damage the quality of the research sample.

4. Response Bias

Response bias occurs when participants give inaccurate responses. That can mean giving answers that make them look good, or it can be the result of leading questions.

For example, even though participants may be told that the survey is anonymous and instructed not to put their name anywhere, a lot of people are still a little wary.

So, they want to make themselves look good. That means they are going to respond to the questions in a way that presents them favorably.

Another example is when the survey contains leading questions. These are questions that are phrased in such a way as to suggest a particular response. 

Either way, the researcher ends up with data that is not accurate.
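The sketch below (Python, with made-up numbers) illustrates the social-desirability side of response bias: some respondents who do not perform a desirable behavior claim that they do, which inflates the estimate even when the sample itself is perfectly representative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical question: "Do you recycle regularly?" True rate: about 40%.
n = 2_000
truly_recycles = rng.random(n) < 0.40

# Assumed social-desirability effect: 30% of people who do NOT recycle
# claim that they do; people who do recycle answer honestly.
misreports = (~truly_recycles) & (rng.random(n) < 0.30)
reported = truly_recycles | misreports

print(f"True recycling rate: {truly_recycles.mean():.1%}")
print(f"Self-reported rate:  {reported.mean():.1%}")
# The reported rate is inflated even though the sample is fine:
# the bias comes from the answers, not from who was sampled.
```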

5. Measurement Bias

Often referred to as measurement error, this points to the question of how accurate the measurement tools used in science really are. If a faulty measurement tool is used, can we really have confidence in the validity of the results?

“Every psychological variable yet studied has been found to be imperfectly measured, as is true throughout all other areas of science” (Schmidt & Hunter, 1996, p. 199).

Measurement bias is a much bigger concern in the social sciences than in the hard sciences such as chemistry and physics. Psychologists and sociologists have to rely on measurement tools like surveys or observations from people. There is a long list of issues that create measurement error in those methods.

Physicists and chemists, by contrast, have high-tech instruments that measure phenomena with a very high degree of precision and accuracy.
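To illustrate the miscalibrated-scale example from the list above, here is a small Python sketch; the weights, the 0.3 lb offset, and the noise level are illustrative assumptions. It shows the key property of measurement bias: unlike random error, a systematic offset does not average out as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true body weights (lbs) for 50 study participants.
true_weights = rng.normal(loc=170, scale=25, size=50)

# A scale that was never calibrated reads 0.3 lbs low every time
# (systematic error), plus a little random noise (random error).
measured = true_weights - 0.3 + rng.normal(0, 0.1, size=50)

bias = (measured - true_weights).mean()
print(f"Average measurement bias: {bias:+.2f} lbs")
# The -0.3 lb offset shifts every measurement in the same direction,
# so collecting more participants does not make it go away.
```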

Conclusion

Statistical biases create numerous issues regarding internal and external validity. If the results of a study are inaccurate due to a sampling bias or a high degree of measurement error, then the data is not valid. If the data is not valid, then it can’t be generalized to the broader population.

This is a bit of a quagmire. Scientists spend a lot of time collecting data; sometimes it can take years, even decades, to complete a study. If the study contains statistical bias, then all of that effort is called into question.

Conclusions regarding human behavior can’t be drawn with confidence, and policy decisions certainly should not be based on the findings.

References

Berg, N. (2005). Non-response bias. In K. Kempf-Leonard (Ed.), Encyclopedia of Social Measurement (pp. 865-873). Elsevier.

Schmidt, F. L., & Hunter, J. E. (1996). Measurement error in psychological research: Lessons from 26 research scenarios. Psychological Methods, 1(2), 199-223. https://doi.org/10.1037/1082-989X.1.2.199

Shapiro, R. Y. (2001). Polling. In N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social & Behavioral Sciences (pp. 11719-11723). Pergamon.

Sinclair, W. S., & Morley, R. W. (2011). Statistical bias problem in on-site surveys: The severity of the problem and its potential for solution. Journal of the Fisheries Research Board of Canada, 32(12), 2520-2524. https://doi.org/10.1139/f75-291
