15 Discriminant Validity Examples


Discriminant validity is a way of validating research that involves demonstrating that one scale is unrelated to scales measuring theoretically distinct constructs. It helps researchers confirm that two scales are genuinely measuring different things.

The term “discriminant validity” was first used by Campbell and Fiske (1959) and later defined more clearly: a “…test [should] not correlate too highly with measures from which it is supposed to differ” (Campbell, 1960, p. 548).

The main idea is that if two scales are measuring two very distinct constructs, then scores on one should not be related to scores on the other. This is usually assessed by administering both scales to a large sample, and then calculating the correlation between the two.

If the correlation is close to 0, then the two scales are measuring two different things. However, if the correlation is close to 1, then the two scales are measuring similar constructs, and therefore there is no discriminant validity.
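To make that correlation check concrete, here is a minimal Python sketch. The variable names (scale_a_scores, scale_b_scores) and the numbers are made up for illustration; a real study would use a much larger sample.

```python
from scipy.stats import pearsonr

# Hypothetical total scores for the same 10 respondents on two scales
scale_a_scores = [12, 15, 9, 20, 14, 11, 18, 16, 13, 17]   # Scale A totals
scale_b_scores = [33, 28, 41, 25, 36, 39, 27, 30, 35, 29]   # Scale B totals

# Pearson correlation between the two sets of scores, plus its p-value
r, p = pearsonr(scale_a_scores, scale_b_scores)
print(f"r = {r:.2f}, p = {p:.3f}")

# A correlation near 0 supports discriminant validity;
# a correlation near 1 suggests the two scales overlap too much.
```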

Discriminant validity is one of two main ways to establish construct validity (the other being convergent validity).

Quick Discriminant Validity Examples

  • Short-Term Memory vs Long-Term Memory Scales: Correlating scores on a test of short-term memory with scores on a test of long-term memory. If they correlate highly, there may be low discriminant validity, and the researchers may be conflating the two types of memory.
  • Math vs English Tests: Calculating the correlation between scores on a math exam and knowledge of English literature (correlation should be zero!).
  • Conservatism vs Liberalism Political Quizzes: Administering two distinct measures of political orientation, one of conservatism and one of liberalism, to the same sample of voters and correlating the scores. The two measures should show a low correlation because they are, ideally, measuring different values.
  • Tests of Different Intelligences: Calculating the correlation between spatial intelligence and verbal intelligence tests. Ideally, each test would not correlate because they are supposed to test two different constructs.
  • Narcissism vs Agreeableness Scales: Administering one personality inventory that measures narcissism and another one that measures agreeableness (Strelan, 2007).
  • Nervousness vs Favorite Food Scales: Assessing the correlation between nervousness and favorite food.
  • Self-Esteem vs Musical Preferences Scales: Measuring the self-esteem of teenagers and musical preferences.
  • Neuroticism vs World Geography: Correlating the scores on a personality test of neuroticism with knowledge of world geography.
  • Task Orientation Leadership vs Concern Leadership: Comparing scores on a leadership scale that measures task orientation with another scale that measures concern for others.
  • Social Skills vs Computer Skills: Assessing the degree of relationship between a measure of social skills and a measure of computer skills.

Detailed Examples

1. The Social Desirability Scale vs Other Personality Scales

This scale measures how concerned a person is with creating a favorable impression. Some people tend to go to great lengths to present themselves in an overly positive manner.

A study by Stöber (2001) involved administering the scale to over 400 participants aged 18 to 89 years. Several other personality scales were also administered.

The results indicated that scores on the Social Desirability Scale “…showed nonsignificant correlations with neuroticism, extraversion, psychoticism, and openness to experience…” (p. 222).

The scale demonstrated discriminant validity because it did not correlate strongly with other personality scales from which it is theoretically distinct.

For example, wanting to create a favorable impression is a theoretically distinct construct from psychoticism and neuroticism. Therefore, scores on these measures should not be highly correlated with social desirability, which is exactly what the data revealed.
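A rough Python sketch of this kind of check might loop over several trait scales and report each correlation with its p-value. The column names (social_desirability, neuroticism, etc.) and the simulated data below are hypothetical stand-ins, not Stöber's actual dataset.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 400

# Simulated (not real) scores: social desirability plus four trait scales
df = pd.DataFrame({
    "social_desirability": rng.normal(50, 10, n),
    "neuroticism": rng.normal(50, 10, n),
    "extraversion": rng.normal(50, 10, n),
    "psychoticism": rng.normal(50, 10, n),
    "openness": rng.normal(50, 10, n),
})

# Correlate social desirability with each trait scale and test for significance
for trait in ["neuroticism", "extraversion", "psychoticism", "openness"]:
    r, p = pearsonr(df["social_desirability"], df[trait])
    verdict = "nonsignificant" if p >= .05 else "significant"
    print(f"{trait:>12}: r = {r:+.2f}, p = {p:.3f} ({verdict})")
```

Nonsignificant, near-zero correlations across the board would mirror the pattern Stöber reported.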

2. Social Skills vs Empathy (Using Confirmatory Factor Analysis)

Confirmatory factor analysis (CFA) is a statistical procedure that allows a researcher to assess discriminant validity within a multi-dimensional scale.

For example, let’s suppose that researchers have developed a scale that measures emotional intelligence. They define emotional intelligence as consisting of five dimensions: self-awareness, self-regulation, social skills, empathy, and motivation.

The researchers generate 10 questions for each dimension and then administer the scale to a large sample of office workers. After collecting all of the questionnaires, the data are put into the SPSS statistics program. A CFA is performed and the results show that all of the self-awareness questions are highly correlated with each other, but less correlated with the other four dimensions.

This pattern holds true for each dimension. That is, questions designed to measure one dimension were all correlated with each other, but far less correlated with the other dimensions, which measure different constructs.

Using CFA is another way of assessing discriminant validity because it can determine whether one construct is related or unrelated to another. The more dissimilar the constructs, the lower the correlations between the dimensions, as sketched below.
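A full CFA would normally be run in dedicated SEM software, but the within-versus-between pattern described above can be illustrated with a simplified Python sketch. The dimension names and item columns here (self_awareness_1, empathy_1, etc.) are hypothetical, and the average-correlation comparison is only a stand-in for the factor structure a real CFA would estimate.

```python
import numpy as np
import pandas as pd

# Hypothetical item-level data: 3 items each for two of the five dimensions
rng = np.random.default_rng(1)
n = 300
self_awareness = rng.normal(0, 1, n)
empathy = rng.normal(0, 1, n)

df = pd.DataFrame({
    # Items within a dimension share a common factor plus noise
    "self_awareness_1": self_awareness + rng.normal(0, .5, n),
    "self_awareness_2": self_awareness + rng.normal(0, .5, n),
    "self_awareness_3": self_awareness + rng.normal(0, .5, n),
    "empathy_1": empathy + rng.normal(0, .5, n),
    "empathy_2": empathy + rng.normal(0, .5, n),
    "empathy_3": empathy + rng.normal(0, .5, n),
})

corr = df.corr()
sa_items = [c for c in df.columns if c.startswith("self_awareness")]
em_items = [c for c in df.columns if c.startswith("empathy")]

# Average correlation among items of the same dimension (off-diagonal only)...
within_vals = corr.loc[sa_items, sa_items].to_numpy()
within_sa = within_vals[~np.eye(3, dtype=bool)].mean()

# ...versus the average correlation between items of different dimensions
between = corr.loc[sa_items, em_items].to_numpy().mean()

print(f"within self-awareness: {within_sa:.2f}, between dimensions: {between:.2f}")
# Discriminant validity is supported when within-dimension correlations
# are clearly higher than between-dimension correlations.
```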

3. Hedonic vs Psychological vs Social Well-Being (The Tripartite Model)

This is a multi-dimensional model of mental well-being which postulates that well-being comprises three distinct domains: hedonic, psychological, and social. To understand a person’s perception of their well-being, a holistic approach that takes into account all three dimensions is necessary.

In one large study, Joshanloo (2019) administered a scale of mental well-being to an American sample of over 2,700 participants and analyzed the data using confirmatory factor analysis (CFA). The results revealed that questions on the scale regarding hedonic well-being were more related to each other than to questions regarding the other two dimensions (i.e., psychological and social).

This pattern held true for all three dimensions. Questions about psychological well-being were all correlated with each other but less correlated with questions from the other two dimensions. Likewise, questions regarding social well-being correlated with each other more than they correlated with the other dimensions.

So, the scale has discriminant validity because dimensions of the scale that should not be highly related were, in fact, not highly related. Similar patterns have been found in cross-cultural studies (Joshanloo & Niknam, 2019).

4. APS vs SWLS Scales (Anger vs Satisfaction with Life)

Since the definition of discriminant validity involves demonstrating that two constructs are different, there may be no better comparison than anger and satisfaction with life.

This exact comparison was made in a study conducted by researchers in Mexico and Peru (Cadena et al., 2018). Several personality questionnaires were placed online. One was the Garcia Anger Proneness Scale (APS) and the other was the Satisfaction with Life Scale (SWLS). Over 400 respondents, ranging in age from 18 to 65 years, filled out the scales.

The results revealed a small, negative correlation between the APS scores and the SWLS scores. This provides evidence of discriminant validity for the APS because it was not highly correlated with a scale that measures a distinct construct.

5. School Readiness vs Social Competence Scales

When young children enter first grade, they need to be fully prepared to take that next step in their academic journey. At the very minimum, that means being able to count to 20 and read some basic words.

These skills are not, however, related to social competence. Plenty of youngsters have great social skills, get along with their classmates, and make friends easily. But that has little to do with their basic academic abilities.

Since these two constructs, school readiness and social competence, are unrelated, they could be used to assess discriminant validity.

The study would involve asking teachers to rate the academic abilities and degree of social competence of their students. Of course, there may be issues with the halo effect when teachers are thinking about each child, so we could ask the teachers to make these ratings one week apart.

After that, the analysis is very straightforward. We just enter all the data into SPSS, click the menu options for calculating the correlation, and the result appears in a few milliseconds.
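For readers without SPSS, the same calculation takes only a few lines of Python. The column names (academic_rating, social_rating) and the ratings themselves are hypothetical, standing in for the teacher ratings described above.

```python
import pandas as pd

# Hypothetical teacher ratings (1-10) for ten students
ratings = pd.DataFrame({
    "academic_rating": [7, 5, 9, 6, 8, 4, 7, 6, 9, 5],  # school readiness
    "social_rating":   [6, 8, 5, 9, 4, 7, 8, 5, 6, 9],  # social competence
})

# Pearson correlation between the two sets of ratings
r = ratings["academic_rating"].corr(ratings["social_rating"])
print(f"r = {r:.2f}")  # a value near 0 would support discriminant validity
```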

Conclusion

Discriminant validity is one method of assessing a scale’s degree of construct validity. If the scale is measuring what it is supposed to measure, then it should not be highly correlated with other scales that measure something different.

Researchers in psychology need to have faith in their measures because they can’t rely on the precision of technology like in the hard sciences. Therefore, they go to great lengths to assess the validity and reliability of their measurement tools. This often takes many years and involves refining the scale as results identify deficiencies.

Eventually, researchers have refined and tested the validity and reliability of a given scale to such an extent that the field is confident in the scale’s ability to measure what it is intended to measure.

References

Campbell, D. T. (1960). Recommendations for APA test standards regarding construct, trait, or discriminant validity. American Psychologist, 15(8), 546–553. https://doi.org/10.1037/h0048255

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological bulletin, 56(2), 81.

Cadena, G., González, L. D., Valle, A., Caycho-Rodríguez, T., & López, A. (2018). Construct validity of a new scale for assessing anger proneness (APS-G). Salud Mental, 41(5), 229-236. https://doi.org/10.17711/SM.0185-3325.2018.034

Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135.

Joshanloo, M., & Niknam, S. (2019). The tripartite model of mental well-being in Iran: Factorial and discriminant validity. Current Psychology, 38, 128–133. https://doi.org/10.1007/s12144-017-9595-7

Joshanloo, M. (2019). Structural and discriminant validity of the tripartite model of mental well-being: Differential relationships with the Big Five traits. Journal of Mental Health, 28(2), 168–174. https://doi.org/10.1080/09638237.2017.1370633

Stöber, J. (2001). The Social Desirability Scale-17 (SDS-17): Convergent validity, discriminant validity, and relationship with age. European Journal of Psychological Assessment, 17(3), 222.

Strelan, P. (2007). Who forgives others, themselves, and situations? The roles of narcissism, guilt, self-esteem, and agreeableness. Personality and Individual Differences, 42(2), 259-269. https://doi.org/10.1016/j.paid.2006.06.017
