Survey Practice
Vol. 17 (2024) · October 17, 2024 EDT

Psychometric adequacy of a survey attitude scale

Kenneth Wallen
Keywords: survey attitude, measurement, psychometrics, validity, reliability, scale development
https://doi.org/10.29115/SP-2024-0016
Wallen, Kenneth. 2024. “Psychometric Adequacy of a Survey Attitude Scale.” Survey Practice 17 (October). https://doi.org/10.29115/SP-2024-0016.

Abstract

As attitude toward surveys contributes to nonresponse, this exploratory study assesses the psychometric adequacy and predictive validity of a survey attitude scale among a target population in the United States. Findings suggest respondents hold a positive attitude toward surveys, in terms of value and enjoyment, and an assessment of the scale’s psychometric properties confirmed acceptable reliability and validity metrics that align with previous assessments. Overall, the results suggest a survey attitude scale can serve as a useful metric and accurate predictor of survey response, which contributes to ongoing discussions of nonresponse and the contemporary survey climate among survey practitioners.

Introduction

For survey practice, interrelated challenges of the contemporary survey climate, such as mode efficacy, communication preferences, and availability of technology, affect response quality, nonresponse error, and perceptions of survey burden (Loosveldt and Joye 2016). For instance, access to and use of web-based survey modes have increased as (a) broadband, cellular, and wireless networks expand, (b) email becomes a ubiquitous contact mode, and (c) low-cost, user-friendly survey software becomes more popular (National Research Council 2013). The proliferation of email and web-based surveys has also led to a tendency to make survey requests brief, with minimal explanations about the survey’s purpose, how the results will be used, or the implications for participants (Dillman 2016). In contrast, tenets of social exchange theory, as applied to survey practice, assume people are more likely to respond when they are aware of the survey’s value and perceive less burden because of that value (Dillman, Smyth, and Christian 2014).

Given continued nonresponse concerns among survey practitioners, a simple question is what people’s general attitudes toward surveys are, i.e., do they see value in completing a survey, enjoy the survey experience, or feel overburdened by survey requests (Loosveldt and Storms 2008). To address this question, de Leeuw et al. (2019) developed a survey attitude scale to assess dimensions of enjoyment, value, and burden. Initial instrument validation revealed adequate psychometric properties and indicated that higher enjoyment and value, and lower burden, were predictive of higher response rates. To complement that initial validation, this study assesses the psychometric adequacy of the survey attitude scale via established reliability and validity criteria among a commonly sampled U.S. population.

Methods

Participants

In the United States, the outdoor recreation industry and its users, including hunters and anglers, are a common target population (Fish and Wildlife Service 2023). This study’s target population was Idaho resident hunters; the sampling frame comprised 2021 tag purchasers who provided an email contact, drawn from the Idaho Department of Fish and Game’s (IDFG) license database.

Materials

Assessment of participants’ survey attitude was based on a three-dimensional instrument developed by de Leeuw et al. (2019) that consists of nine items organized into three subscales: survey enjoyment (e1-e3), survey value (v1-v3), and survey burden (b1-b3) (Table 1). The question stem read, “Thinking about the surveys that you are asked to participate in, please indicate your level of agreement with the following statements,” and items were measured on a 5-point bipolar response scale from strongly disagree (1) to strongly agree (5).

Table 1. Survey attitude scale item mean (M), standard deviation (SD), standardized factor loading (λ), squared multiple correlation (SMC), Cronbach’s α, McDonald’s ω, composite reliability (CR), and average variance extracted (AVE) (n = 6,235).

| Item | M | SD | λ | SMC |
|---|---|---|---|---|
| Survey enjoyment (α = .85, ω = .85, CR = .75, AVE = .50) | 3.3 | 0.8 | | |
| I enjoy responding to surveys (e1) | 3.2 | 0.9 | 0.72 | 0.52 |
| I enjoy being asked to complete a survey (e2) | 3.3 | 0.9 | 0.72 | 0.52 |
| Surveys are interesting (e3) | 3.4 | 0.8 | 0.67 | 0.45 |
| Survey value (α = .79, ω = .79, CR = .67, AVE = .41) | 3.6 | 0.7 | | |
| Surveys are important for society (v1) | 3.6 | 0.9 | 0.63 | 0.40 |
| A lot can be learned from surveys (v2) | 3.8 | 0.8 | 0.59 | 0.35 |
| Completing surveys is a valuable use of my time (v3) | 3.4 | 0.9 | 0.70 | 0.48 |
| Survey burden (α = .73, ω = .74, CR = .79, AVE = .55) | 2.8 | 0.9 | | |
| I get too many requests to do surveys (b1) | 2.9 | 1.0 | 0.86 | 0.73 |
| Surveys are an invasion of privacy (b2) | 2.8 | 1.2 | 0.75 | 0.57 |
| It is tiring to answer a lot of survey questions (b3) | 3.0 | 1.0 | 0.60 | 0.36 |

Note: χ2 = 224.12, df = 24, CFI = .99, TLI = .98, RMSEA = .04, SRMR = .04
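
For illustration, composite subscale scores of the kind reported in the Results are simple item means within each dimension. A minimal scoring sketch, assuming a pandas DataFrame with hypothetical columns named by the item codes in Table 1:

```python
import pandas as pd

# Hypothetical responses on the 1-5 agreement scale, keyed by the
# item codes in Table 1 (e1-e3, v1-v3, b1-b3); illustrative data only.
df = pd.DataFrame({
    "e1": [3, 4], "e2": [3, 4], "e3": [4, 3],
    "v1": [4, 3], "v2": [4, 4], "v3": [3, 3],
    "b1": [2, 3], "b2": [2, 4], "b3": [3, 3],
})

# Composite scores: the mean of the three items in each subscale.
df["enjoyment"] = df[["e1", "e2", "e3"]].mean(axis=1)
df["value"] = df[["v1", "v2", "v3"]].mean(axis=1)
df["burden"] = df[["b1", "b2", "b3"]].mean(axis=1)
print(df[["enjoyment", "value", "burden"]])
```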

Procedure

In February 2021, survey invitations were emailed to a probability sample of 41,058 participants. Survey design and contact followed social exchange tenets (Dillman, Smyth, and Christian 2014), i.e., invitations stated the survey’s purpose and how respondents’ data would be used, and identified sponsorship by legitimate public organizations, the University of Idaho and IDFG. Survey invitations were sent via Granicus’ GovDelivery system, and respondents completed a web-based questionnaire hosted by Qualtrics. Reminders were sent at four-day intervals, and the survey effort ended after 30 days. Following this effort, a nonresponse check mailed a shortened hardcopy questionnaire and a prepaid return postage envelope to a probability sample of 3,000 participants who did not respond to the initial effort (referred to as the nonresponse sample below).

Analysis

To assess psychometric adequacy, analytical procedures follow de Leeuw et al. (2019) and established methods of scale measurement, validity, and reliability (DeVellis 2016; Kyle et al. 2020; Netemeyer, Bearden, and Sharma 2003). Reliability was assessed via (a) McDonald’s coefficient omega (> 0.7), (b) Cronbach’s coefficient alpha (> 0.7), and (c) composite reliability (CR) (> 0.7). Construct validity was assessed via a three-factor confirmatory factor analysis with a maximum likelihood estimator and established fit indicators and criteria: CFI (> 0.9), TLI (> 0.9), and RMSEA (< 0.08). Convergent validity was assessed via (a) significance of factor loadings (p < .001), (b) strength of standardized factor loadings (> .707), (c) squared multiple correlations (SMC; > .5), and (d) average variance extracted (AVE; > .5). Discriminant validity was assessed via (a) AVE greater than squared latent correlations, (b) AVE square root greater than latent variable correlations, (c) confidence intervals of latent variable correlations not including 1.0, and (d) SMCs less than AVE. Criterion validity (predictive validity) was assessed via (a) correlation between subscales and response and (b) logistic regression of survey attitude subscales (independent variables) on survey response (dependent variable). Statistical analyses were conducted in Mplus 8.2 and JASP 0.19.
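
As a worked check on the reliability criteria above, CR and AVE can be computed directly from standardized loadings: CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) under the assumption of uncorrelated errors, and AVE = Σλ² / k. A minimal sketch using the loadings later reported in Table 1 (small discrepancies from the published values reflect two-decimal rounding of λ):

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum(lam))^2 / ((sum(lam))^2 + sum(1 - lam^2)),
    assuming standardized loadings and uncorrelated errors."""
    lam = np.asarray(loadings)
    explained = lam.sum() ** 2
    return explained / (explained + (1 - lam**2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam**2).mean()

# Standardized loadings reported in Table 1 (respondent sample).
subscales = {
    "enjoyment": [0.72, 0.72, 0.67],
    "value":     [0.63, 0.59, 0.70],
    "burden":    [0.86, 0.75, 0.60],
}
for name, lam in subscales.items():
    print(f"{name}: CR = {composite_reliability(lam):.2f}, "
          f"AVE = {average_variance_extracted(lam):.2f}")
# enjoyment: CR = 0.75, AVE = 0.50
# value:     CR = 0.68, AVE = 0.41   (paper reports CR = .67)
# burden:    CR = 0.79, AVE = 0.55
```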

Results

Survey Response

The sample consisted of 6,235 usable responses (respondents were 93% male, 93% white, and averaged 52 years of age, which reflects the demographics of the target population), resulting in an effective response rate of 20% and a margin of sampling error of ±2%. Mean composite scores for each subscale were 3.3 (enjoyment), 3.6 (value), and 2.8 (burden) (Table 1). The nonresponse sample consisted of 1,011 usable responses, with mean composite scores of 3.1 (enjoyment), 3.5 (value), and 3.2 (burden) (Table 2). Further information on responses for each item can be found in Supplementary File Tables 1 and 2.

Table 2. Nonresponse survey attitude scale item mean (M), standard deviation (SD), standardized factor loading (λ), squared multiple correlation (SMC), Cronbach’s α, McDonald’s ω, composite reliability (CR), and average variance extracted (AVE) (n = 1,011).

| Item | M | SD | λ | SMC |
|---|---|---|---|---|
| Survey enjoyment (α = .92, ω = .92, CR = .93, AVE = .82) | 3.1 | 0.9 | | |
| I enjoy responding to surveys (e1) | 3.0 | 1.0 | 0.92 | 0.85 |
| I enjoy being asked to complete a survey (e2) | 3.0 | 1.0 | 0.92 | 0.85 |
| Surveys are interesting (e3) | 3.2 | 1.0 | 0.86 | 0.74 |
| Survey value (α = .86, ω = .86, CR = .84, AVE = .64) | 3.5 | 0.9 | | |
| Surveys are important for society (v1) | 3.6 | 1.0 | 0.77 | 0.59 |
| A lot can be learned from surveys (v2) | 3.8 | 0.9 | 0.69 | 0.48 |
| Completing surveys is a valuable use of my time (v3) | 3.2 | 1.0 | 0.90 | 0.81 |
| Survey burden (α = .74, ω = .74, CR = .74, AVE = .48) | 3.2 | 0.8 | | |
| I get too many requests to do surveys (b1) | 3.0 | 1.0 | 0.71 | 0.50 |
| Surveys are an invasion of privacy (b2) | 3.6 | 0.9 | 0.62 | 0.38 |
| It is tiring to answer a lot of survey questions (b3) | 3.1 | 1.0 | 0.74 | 0.55 |

Note: χ2 = 47.95, df = 24, CFI = .99, TLI = .99, RMSEA = .03, SRMR = .04.

Reliability

Indicators of congeneric reliability (McDonald’s ω) were adequate for all subscales: .85 (enjoyment), .79 (value), and .74 (burden) (Table 1). Indicators of internal consistency (Cronbach’s α) were similarly adequate: .85 (enjoyment), .79 (value), and .73 (burden). Indicators of composite reliability (CR) were adequate for survey enjoyment (.75) and survey burden (.79) and marginal for survey value (.67).

Validity

Confirmatory factor model parameters based on de Leeuw et al. (2019) indicate acceptable fit indices and construct validity: χ2 = 224.12, df = 24, CFI = .99, TLI = .98, RMSEA = .04, SRMR = .04. Indicators of convergent validity were variably adequate: (a) all factor loadings (λ) were significant, (b) the majority were greater than the established threshold of 0.707 and all were greater than an acceptable threshold of 0.4, (c) squared multiple correlations (SMC) showed an analogous pattern relative to the 0.5 threshold, and (d) average variance extracted (AVE) was greater than 0.5 for survey enjoyment (0.50) and survey burden (0.55), though survey value (0.41) fell below the established threshold (Table 1). Indicators of discriminant validity were also variably adequate: (a) comparison of AVE to the squared latent variable correlations (r²) and (b) comparison of the AVE square root (√AVE) to latent variable correlations (r) marginally supported the survey value and survey burden subscales, but those criteria were less convincing when applied to the survey enjoyment subscale (Table 3). Discriminant validity was also marginally supported by (c) confidence intervals for the survey value and survey burden construct correlations not including 1.0 and (d) the observation of some item SMCs less than AVE within each subscale (Table 3).

Table 3. Comparison of average variance extracted (AVE), squared latent correlations (r²), AVE square root (√AVE), latent correlations (r), confidence intervals of latent variable correlations (rCI), and squared multiple correlations (SMC) to assess discriminant validity.

| Subscale | AVE | r² | √AVE | r | rCI | SMC |
|---|---|---|---|---|---|---|
| Survey enjoyment | 0.50 | 0.712 | 0.70 | 0.84 | 0.920 – 1.009 | 0.52, 0.52, 0.45 |
| Survey value | 0.41 | 0.001 | 0.64 | -0.04 | -0.044 – 0.000 | 0.40, 0.35, 0.48 |
| Survey burden | 0.55 | 0.001 | 0.74 | -0.03 | -0.057 – 0.004 | 0.73, 0.57, 0.36 |
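
The Fornell-Larcker comparison in Table 3 reduces to a simple rule: discriminant validity is supported when a construct’s √AVE exceeds its absolute latent correlation with the other constructs. A small sketch using the Table 3 values (√0.50 prints as 0.71 here; the paper reports 0.70 from unrounded AVE):

```python
import math

# AVE and latent correlation (r) per subscale, from Table 3.
table3 = {
    "enjoyment": (0.50, 0.84),
    "value":     (0.41, -0.04),
    "burden":    (0.55, -0.03),
}
for name, (ave, r) in table3.items():
    supported = math.sqrt(ave) > abs(r)
    print(f"{name}: sqrt(AVE) = {math.sqrt(ave):.2f}, |r| = {abs(r):.2f}, "
          f"supported = {supported}")
# enjoyment fails (0.71 < 0.84); value and burden pass
```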

Criterion validity (predictive validity) was assessed via correlations between the subscales and response and a logistic regression of the subscales on survey response, as described above. Based on Spearman’s rho and Pearson’s point-biserial correlations, we observed the expected direction of relationship between response and survey enjoyment (rs = .08, rpb = .10) and survey burden (rs = -.16, rpb = -.15); the correlation between response and survey value was near zero (rs = .01, rpb = .04) (Figure 1). Based on the pooled sample of respondents and nonrespondents, a logistic regression of survey enjoyment, value, and burden composite scores predicted survey response (χ2 = 288.91, p < .001; βEnjoy = .45, βValue = -.15, βBurden = -.51) (Table 4). Given a unit change in survey enjoyment, the odds of responding are expected to increase by nearly a factor of 2 (OREnjoy = 1.75, 95% CI of B [.45, .67]), whereas a unit change in survey value slightly lowers the odds of responding, approaching no effect (ORValue = .82, 95% CI of B [-.33, -.08]), and a unit change in survey burden decreases the odds of response (ORBurden = .55, 95% CI of B [-.67, -.50]). Further information on validity tests can be found in Supplementary File Tables 3-6 and Figure 1.

Figure 1
Table 4. Logistic regression of survey attitude subscales and survey response to assess predictive validity (confidence intervals are for the unstandardized coefficient B).

| Predictor | B | SE | β | OR | z | CI lower | CI upper |
|---|---|---|---|---|---|---|---|
| Survey enjoyment | 0.562 | 0.057 | 0.447 | 1.754 | 9.826 | 0.450 | 0.674 |
| Survey value | -0.203 | 0.062 | -0.150 | 0.817 | -3.254 | -0.325 | -0.081 |
| Survey burden | -0.589 | 0.043 | -0.506 | 0.555 | -13.608 | -0.674 | -0.504 |
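
As a sanity check on Table 4, each odds ratio is the exponentiated unstandardized coefficient, OR = exp(B); tiny differences from the published values reflect rounding of B:

```python
import math

# Unstandardized logistic coefficients (B) from Table 4.
coefficients = {"enjoyment": 0.562, "value": -0.203, "burden": -0.589}
for name, b in coefficients.items():
    print(f"{name}: OR = exp({b}) = {math.exp(b):.3f}")
# enjoyment: 1.754, value: 0.816 (paper: 0.817), burden: 0.555
```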

Discussion

The main contribution of this study is an assessment of the psychometric adequacy of a general survey attitude scale to inform discussions of unit-level nonresponse, data quality, and, more broadly, the contemporary survey climate. Similar to de Leeuw et al. (2019), our results suggest acceptable reliability of the survey attitude instrument and subscales (particularly in consideration of the number of items and threshold conventions). Construct validity was also adequately established, which likewise aligns with de Leeuw et al.'s (2019) finding of a three-factor model with acceptable fit indices (and we similarly observed indicators of e3 cross-loading onto value and v3 cross-loading onto burden). In our interpretation, these results stem from the content validity of the scale items and their basis in established theory and practical evidence. Of note, we did not test a model that would support combining the three subscales into a single summated score.

Convergent and discriminant validity were variably adequate and provide only limited support in this regard. Except for value, adequate convergent validity was observed for enjoyment and burden based on AVE and CR. However, the criteria for both forms of validity vary by indicator, scale size, and disciplinary conventions. Importantly, criterion validity supported the ability of the survey attitude subscales to predict response. For survey practice, establishing this form of validity may be most consequential and may bolster less certain indicators of convergent and discriminant validity.

Next steps to further establish psychometric adequacy would be additional replications, particularly efforts that vary the target population among general and context-specific publics (e.g., voters, recreationists, customers), use both probability and panel samples, and emphasize cross-cultural elements. In terms of predictive validity, replication of de Leeuw et al.'s (2019) ‘willingness to be surveyed again’ question would add another layer of criterion validity; its omission was an inadvertent oversight in this study.

In conclusion, survey practitioners may find that incorporating a validated survey attitude scale into their work provides practical insights relevant to the contemporary survey climate, including issues of mode, burden, fatigue, and panel attrition. With more actionable insights in mind, continued validation of the survey attitude scale (particularly its criterion validity) may likewise help survey practitioners better frame their participant recruitment and invitation strategies (consistent with the premise of social exchange theory). Moreover, in the tradition of “surveys on surveys” research, a validated multidimensional survey attitude scale contributes an essential line of inquiry useful to academic and industry survey practitioners.


Acknowledgements

We thank all respondents and the Idaho Department of Fish and Game for facilitating data collection. We thank members of the USDA WERA-1010 committee, Don A. Dillman, Edith de Leeuw, and Joop Hox for their helpful insights. We also thank the editor and reviewer for their professionalism and valuable comments that improved the quality of the manuscript.

Lead author’s contact information

Kenneth E. Wallen, Ph.D., Department of Natural Resources and Society, University of Idaho, 875 Perimeter Dr., Moscow, ID 83844, USA

Email: kwallen@uidaho.edu

Submitted: August 03, 2024 EDT

Accepted: September 26, 2024 EDT

References

de Leeuw, Edith, Joop Hox, Henning Silber, Bella Struminskaya, and Corrie Vis. 2019. “Development of an International Survey Attitude Scale: Measurement Equivalence, Reliability, and Predictive Validity.” Measurement Instruments for the Social Sciences 1:1–10. https://doi.org/10.1186/s42409-019-0012-x.
DeVellis, Robert F. 2016. Scale Development: Theory and Applications. Thousand Oaks: Sage.
Dillman, Don A. 2016. “Moving Survey Methodology Forward in Our Rapidly Changing World: A Commentary.” Journal of Rural Social Sciences 31 (3): 160–74.
Dillman, Don A., Jolene D. Smyth, and Leah M. Christian. 2014. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: Wiley. https://doi.org/10.1002/9781394260645.
Fish and Wildlife Service. 2023. “2022 National Survey of Fishing, Hunting, and Wildlife-Associated Recreation.” Washington, D.C.: Department of the Interior.
Kyle, Gerard, Adam Landon, Jerry Vaske, and Kenneth Wallen. 2020. “Tools for Assessing the Psychometric Adequacy of Latent Variables in Conservation Research.” Conservation Biology 34 (1): 1353–63. https://doi.org/10.1111/cobi.13625.
Loosveldt, Geert, and Dominique Joye. 2016. “Defining and Assessing Survey Climate.” In The Sage Handbook of Survey Methodology, edited by C. Wolf, D. Joye, T. W. Smith, and Y. Fu, 67–76. Thousand Oaks: Sage. https://doi.org/10.4135/9781473957893.n6.
Loosveldt, Geert, and Vicky Storms. 2008. “Measuring Public Opinions About Surveys.” International Journal of Public Opinion Research 20:74–89. https://doi.org/10.1093/ijpor/edn006.
National Research Council. 2013. Nonresponse in Social Science Surveys: A Research Agenda. Washington, D.C.: National Academies Press.
Netemeyer, Richard G., William O. Bearden, and Subhash Sharma. 2003. Scaling Procedures: Issues and Applications. Thousand Oaks: Sage. https://doi.org/10.4135/9781412985772.
